VAR-202004-2191
Vulnerability from variot - Updated: 2024-07-23 22:10

In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. jQuery is an open-source, cross-browser JavaScript library originally developed by the American programmer John Resig. The library simplifies interaction between HTML and JavaScript and supports modularization and plug-in extension. The vulnerability stems from a lack of proper validation of client-supplied data in web applications; an attacker could exploit it to execute client-side code.

Solution:
For information on upgrading Ansible Tower, reference the Ansible Tower Upgrade and Migration Guide: https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/index.html
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
=====================================================================
                   Red Hat Security Advisory

Synopsis:          Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:3247-01
Product:           Red Hat Virtualization
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:3247
Issue date:        2020-08-04
CVE Names:         CVE-2017-18635 CVE-2019-8331 CVE-2019-10086
                   CVE-2019-13990 CVE-2019-17195 CVE-2019-19336
                   CVE-2020-7598 CVE-2020-10775 CVE-2020-11022
                   CVE-2020-11023
=====================================================================
- Summary:
Updated ovirt-engine packages that fix several bugs and add various enhancements are now available.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch, x86_64
- Description:
The ovirt-engine package provides the Red Hat Virtualization Manager, a centralized management platform that allows system administrators to view and manage virtual machines. The Manager provides a comprehensive range of features including search capabilities, resource management, live migrations, and virtual infrastructure provisioning.
The Manager is a JBoss Application Server application that provides several interfaces through which the virtual environment can be accessed and interacted with, including an Administration Portal, a VM Portal, and a Representational State Transfer (REST) Application Programming Interface (API).
A list of bugs fixed in this update is available in the Technical Notes book:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes
Security Fix(es):
- apache-commons-beanutils: does not suppress the class property in PropertyUtilsBean by default (CVE-2019-10086)

- libquartz: XXE attacks via job description (CVE-2019-13990)

- novnc: XSS vulnerability via the messages propagated to the status field (CVE-2017-18635)

- bootstrap: XSS in the tooltip or popover data-template attribute (CVE-2019-8331)

- nimbus-jose-jwt: Uncaught exceptions while parsing a JWT (CVE-2019-17195)

- ovirt-engine: response_type parameter allows reflected XSS (CVE-2019-19336)

- nodejs-minimist: prototype pollution allows adding or modifying properties of Object.prototype using a constructor or __proto__ payload (CVE-2020-7598)

- ovirt-engine: Redirect to arbitrary URL allows for phishing (CVE-2020-10775)

- jquery: Cross-site scripting due to improper jQuery.htmlPrefilter method (CVE-2020-11022)

- jQuery: passing HTML containing elements to manipulation methods could result in untrusted code execution (CVE-2020-11023)
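The minimist issue (CVE-2020-7598) follows a common prototype-pollution pattern: a parser that assigns attacker-controlled nested keys can be steered onto Object.prototype via a `__proto__` or `constructor` key. The sketch below uses a hypothetical `setPath` helper to illustrate the pattern; it is not minimist's actual source.

```javascript
// Hypothetical nested-key assignment of the kind CVE-2020-7598 describes;
// an illustration of the pollution pattern, not minimist's code.
function setPath(obj, keys, value) {
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (cur[keys[i]] === undefined) cur[keys[i]] = {};
    cur = cur[keys[i]]; // a "__proto__" key walks onto Object.prototype
  }
  cur[keys[keys.length - 1]] = value;
}

const parsed = {};
// An attacker-supplied argument such as --__proto__.polluted=true
// would be split into the key path below:
setPath(parsed, ["__proto__", "polluted"], true);

console.log({}.polluted); // true: every plain object now inherits the property
```

Patched minimist versions refuse to assign through `__proto__` and `constructor` keys; rejecting those key names (or parsing into null-prototype objects) is the standard mitigation.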
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
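The two jQuery issues (CVE-2020-11022, CVE-2020-11023) can be illustrated without a browser. Before 3.5.0, jQuery's htmlPrefilter expanded self-closing tags with a regex, which can push a closing tag across a parse boundary that a sanitizer relied on. The snippet below is a sketch of that pre-3.5.0 behavior (the regex and replacement mirror the library's approach; this is not jQuery's full source):

```javascript
// Sketch of jQuery's pre-3.5.0 htmlPrefilter: self-closing non-void tags
// (anything except area, br, col, etc.) are expanded into open/close pairs.
const rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi;

function htmlPrefilter(html) {
  return html.replace(rxhtmlTag, "<$1></$2>");
}

// A sanitizer treats everything between <style> and </style> as inert text,
// so this string looks safe. After the rewrite, the injected "</style>"
// closes the element early, shifting what the browser parses as live markup.
console.log(htmlPrefilter("<style><style/></style>"));
// -> <style><style></style></style>
```

jQuery 3.5.0 addressed this by making htmlPrefilter an identity function, so strings are no longer rewritten before insertion.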
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):
1080097 - [RFE] Allow editing disks details in the Disks tab
1325468 - [RFE] Autostart of VMs that are down (with Engine assistance - Engine has to be up)
1358501 - [RFE] multihost network change - notify when done
1427717 - [RFE] Create and/or select affinity group upon VM creation.
1475774 - RHV-M requesting four GetDeviceListVDSCommand when editing storage domain
1507438 - not able to deploy new rhvh host when "/tmp" is mounted with "noexec" option
1523835 - Hosted-Engine: memory hotplug does not work for engine vm
1527843 - [Tracker] Q35 chipset support (with seabios)
1529042 - [RFE] Changing of Cluster CPU Type does not trigger config update notification
1535796 - Undeployment of HE is not graceful
1546838 - [RFE] Refuse to deploy on localhost.localdomain
1547937 - [RFE] Live Storage Migration progress bar.
1585986 - [HE] When lowering the cluster compatibility, we need to force update the HE storage OVF store to ensure it can start up (migration will not work).
1593800 - [RFE] forbid new mac pools with overlapping ranges
1596178 - inconsistent display between automatic and manual Pool Type
1600059 - [RFE] Add by default a storage lease to HA VMs
1610212 - After updating to RHV 4.1 while trying to edit the disk, getting error "Cannot edit Virtual Disk. Cannot edit Virtual Disk. Disk extension combined with disk compat version update isn't supported. Please perform the updates separately."
1611395 - Unable to list Compute Templates in RHV 4.2 from Satellite 6.3.2
1616451 - [UI] add a tooltip to explain the supported matrix for the combination of disk allocation policies, formats and the combination result
1637172 - Live Merge hung in the volume deletion phase, leaving snapshot in a LOCKED state
1640908 - Javascript Error popup when Managing StorageDomain with LUNs and 400+ paths
1642273 - [UI] - left nav border highlight missing in RHV
1647440 - [RFE][UI] Provide information about the VM next run
1648345 - Jobs are not properly cleaned after a failed task.
1650417 - HA is broken for VMs having disks in NFS storage domain because of Qemu OFD locking
1650505 - Increase of ClusterCompatibilityVersion to Cluster with virtual machines with outstanding configuration changes, those changes will be reverted
1651406 - [RFE] Allow Maintenance of Host with Enforcing VM Affinity Rules (hard affinity)
1651939 - a new size of the direct LUN not updated in Admin Portal
1654069 - [Downstream Clone] [UI] - grids bottom scrollbar hides bottom row
1654889 - [RFE] Support console VNC for mediated devices
1656621 - Importing VM OVA always enables 'Cloud-Init/Sysprep'
1658101 - [RESTAPI] Adding ISO disables serial console
1659161 - Unable to edit pool that is delete protected
1660071 - Regression in Migration of VM that starts in pause mode: took 11 hours
1660644 - Concurrent LSMs of the same disk can be issued via the REST-API
1663366 - USB selection option disabled even though USB support is enabled in RHV-4.2
1664479 - Third VM fails to get migrated when host is placed into maintenance mode
1666913 - [UI] warn users about different "Vdsm Name" when creating network with a fancy char or long name
1670102 - [CinderLib] - openstack-cinder and cinderlib packages are not installed on ovirt-engine machine
1671876 - "Bond Active Slave" parameter on RHV-M GUI shows an incorrect until Refresh Caps
1679039 - Unable to upload image through Storage->Domain->Disk because of wrong DC
1679110 - [RFE] change Admin Portal toast notifications location
1679471 - [ja, de, es, fr, pt_BR] The console client resources page shows truncated title for some locales
1679730 - Warn about host IP addresses outside range
1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute
1686650 - Memory snapshots' deletion logging unnecessary WARNINGS in engine.log
1687345 - Snapshot with memory volumes can fail if the memory dump takes more than 180 seconds
1690026 - [RFE] - Creating an NFS storage domain the engine should let the user specify exact NFS version v4.0 and not just v4
1690155 - Disk migration progress bar not clearly visible and unusable.
1690475 - When a live storage migration fails, the auto generated snapshot does not get removed
1691562 - Cluster level changes are not increasing VMs generation numbers and so a new OVF_STORE content is not copied to the shared storage
1692592 - "Enable menu to select boot device shows 10 device listed with cdrom at 10th slot but when selecting 10 option the VM took 1 as option and boot with disk
1693628 - Engine generates too many updates to vm_dynamic table due to the session change
1693813 - Do not change DC level if there are VMs running/paused with older CL.
1695026 - Failure in creating snapshots during "Live Storage Migration" can result in a nonexistent snapshot
1695635 - [RFE] Improve Host Drop-down menu in different Dialogs (i.e. Alphabetical sort of Hosts in Remove|New StorageDomains)
1696245 - [RFE] Allow full customization while cloning a VM
1696669 - Build bouncycastle for RHV 4.4 RHEL 8
1696676 - Build ebay-cors-filter for RHV 4.4 RHEL 8
1698009 - Build openstack-java-sdk for RHV 4.4 RHEL 8
1698102 - Print a warning message to engine-setup, which highlights that other clusters than the Default one are not modified to use ovirt-provider-ovn as the default network provider
1700021 - [RFE] engine-setup should warn and prompt if ca.pem is missing but other generated pki files exist
1700036 - [RFE] Add RedFish API for host power management for RHEV
1700319 - VM is going to pause state with "storage I/O error".
1700338 - [RFE] Alternate method to configure the email Event Notifier for a user in RHV through API (instead of RHV GUI)
1700725 - [scale] RHV-M runs out of memory due to to much data reported by the guest agent
1700867 - Build makeself for RHV 4.4 RHEL 8
1701476 - Build unboundid-ldapsdk for RHV 4.4 RHEL 8
1701491 - Build RHV-M 4.4 - RHEL 8
1701522 - Build ovirt-imageio-proxy for RHV 4.4 / RHEL 8
1701528 - Build / Tag python-ovsdbapp for RHV 4.4 RHEL 8
1701530 - Build / Tag ovirt-cockpit-sso for RHV 4.4 RHEL 8
1701531 - Build / Tag ovirt-engine-api-explorer for RHV 4.4 RHEL 8
1701533 - Build / Tag ovirt-engine-dwh for RHV 4.4 / RHEL 8
1701538 - Build / Tag vdsm-jsonrpc-java for RHV 4.4 RHEL 8
1701544 - Build rhvm-dependencies for RHV 4.4 RHEL 8
1702310 - Build / Tag ovirt-engine-ui-extensions for RHV 4.4 RHEL 8
1702312 - Build ovirt-log-collector for RHV 4.4 RHEL 8
1703112 - PCI address of NICs are not stored in the database after a hotplug of passthrough NIC resulting in change of network device name in VM after a reboot
1703428 - VMs migrated from KVM to RHV show warning 'The latest guest agent needs to be installed and running on the guest'
1707225 - [cinderlib] Cinderlib DB is missing a backup and restore option
1708624 - Build rhvm-setup-plugins for RHV 4.4 - RHEL 8
1710491 - No EVENT_ID is generated in /var/log/ovirt-engine/engine.log when VM is rebooted from OS level itself.
1711006 - Metrics installation fails during the execution of playbook ovirt-metrics-store-installation if the environment is not having DHCP
1712255 - Drop 4.1 datacenter/cluster level
1712746 - [RFE] Ignition support for ovirt vms
1712890 - engine-setup should check for snapshots in unsupported CL
1714528 - Missing IDs on cluster upgrade buttons
1714633 - Using more than one asterisk in the search string is not working when searching for users.
1714834 - Cannot disable SCSI passthrough using API
1715725 - Sending credentials in query string logs them in ovirt-request-logs
1716590 - [RFE][UX] Make Cluster-wide "Custom serial number policy" value visible at VM level
1718818 - [RFE] Enhance local disk passthrough
1720686 - Tag ovirt-scheduler-proxy for RHV 4.4 RHEL 8
1720694 - Build ovirt-engine-extension-aaa-jdbc for RHV 4.4 RHEL 8
1720795 - New guest tools are available mark in case of guest tool located on Data Domain
1724959 - RHV recommends reporting issues to GitHub rather than access.redhat.com (ovirt->RHV rebrand glitch?)
1727025 - NPE in DestroyImage endAction during live merge leaving a task in DB for hours causing operations depending on host clean tasks to fail as Deactivate host/StopSPM/deactivate SD
1728472 - Engine reports network out of sync due to ipv6 default gateway via ND RA on a non default route network.
1729511 - engine-setup fails to upgrade to 4.3 with Unicode characters in CA subject
1729811 - [scale] updatevmdynamic broken if too many users logged in - psql ERROR: value too long for type character varying(255)
1730264 - VMs will fail to start if the vnic profile attached is having port mirroring enabled and have name greater than 15 characters
1730436 - Snapshot creation was successful, but snapshot remains locked
1731212 - RHV 4.4 landing page does not show login or allow scrolling.
1731590 - Cannot preview snapshot, it fails and VM remains locked.
1733031 - [RFE] Add warning when importing data domains to newer DC that may trigger SD format upgrade
1733529 - Consume python-ovsdbapp dependencies from OSP in RHEL 8 RHV 4.4
1733843 - Export to OVA fails if VM is running on the Host doing the export
1734839 - Unable to start guests in our Power9 cluster without running in headless mode.
1737234 - Attach a non-existent ISO to vm by the API return 201 and marks the Attach CD checkbox as ON
1737684 - Engine deletes the leaf volume when SnapshotVDSCommand timed out without checking if the volume is still used by the VM
1740978 - [RFE] Warn or Block importing VMs/Templates from unsupported compatibility levels.
1741102 - host activation causes RHHI nodes to lose the quorum
1741271 - Move/Copy disk are blocked if there is less space in source SD than the size of the disk
1741625 - VM fails to be re-started with error: Failed to acquire lock: No space left on device
1743690 - Commit and Undo buttons active when no snapshot selected
1744557 - RHV 4.3 throws an exception when trying to access VMs which have snapshots from unsupported compatibility levels
1745384 - [IPv6 Static] Engine should allow updating network's static ipv6gateway
1745504 - Tag rhv-log-collector-analyzer for RHV 4.4 RHEL 8
1746272 - [BREW BUILD ENABLER] Build the oVirt Ansible roles for RHV 4.4.0
1746430 - [Rebase] Rebase v2v-conversion-host for RHV 4.4 Engine
1746877 - [Metrics] Rebase bug - for the 4.4 release on EL8
1747772 - Extra white space at the top of webadmin dialogs
1749284 - Change the Snapshot operation to be asynchronous
1749944 - teardownImage attempts to deactivate in-use LV's rendering the VM disk image/volumes in locked state.
1750212 - MERGE_STATUS fails with 'Invalid UUID string: mapper' when Direct LUN that already exists is hot-plugged
1750348 - [Tracking] rhvm-branding-rhv for RHV 4.4
1750357 - [Tracking] ovirt-web-ui for RHV 4.4
1750371 - [Tracking] ovirt-engine-ui-extensions for RHV 4.4
1750482 - From VM Portal, users cannot create Operating System Windows VM.
1751215 - Unable to change Graphical Console of HE VM.
1751268 - add links to Insights to landing page
1751423 - Improve description of shared memory statistics and remove unimplemented memory metrics from API
1752890 - Build / Tag ovirt-engine-extension-aaa-ldap for RHV 4.4 RHEL 8
1752995 - [RFE] Need to be able to set default console option
1753629 - Build / Tag ovirt-engine-extension-aaa-misc for RHV 4.4 RHEL 8
1753661 - Build / Tag ovirt-engine-extension-logger-log4j got RHV 4.4 / RHEl 8
1753664 - Build ovirt-fast-forward-upgrade for RHV 4.4 /RHEL 8 support
1754363 - [Scale] Engine generates excessive amount of dns configuration related sql queries
1754490 - RHV Manager cannot start on EAP 7.2.4
1755412 - Setting "oreg_url: registry.redhat.io" fails with error
1758048 - clone(as thin) VM from template or create snapshot fails with 'Requested capacity 1073741824 < parent capacity 3221225472 (volume:1211)'
1758289 - [Warn] Duplicate chassis entries in southbound database if the host is down while removing the host from Manager
1762281 - Import of OVA created from template fails with java.lang.NullPointerException
1763992 - [RFE] Show "Open Console" as the main option in the VM actions menu
1764289 - Document details how each fence agent can be configured in RESTAPI
1764791 - CVE-2019-17195 nimbus-jose-jwt: Uncaught exceptions while parsing a JWT
1764932 - [BREW BUILD ENABLER] Build the ansible-runner-service for RHV 4.4
1764943 - Create Snapshot does not proceed beyond CreateVolume
1764959 - Apache is configured to offer TRACE method (security)
1765660 - CVE-2017-18635 novnc: XSS vulnerability via the messages propagated to the status field
1767319 - [RFE] forbid updating mac pool that contains ranges overlapping with any mac range in the system
1767483 - CVE-2019-10086 apache-commons-beanutils: does not suppresses the class property in PropertyUtilsBean by default
1768707 - Cannot set or update iscsi portal group tag when editing storage connection via API
1768844 - RHEL Advanced virtualization module streams support
1769463 - [Scale] Slow performance for api/clusters when many networks devices are present
1770237 - Cannot assign a vNIC profile for VM instance profile.
1771793 - VM Portal crashes in what appears to be a permission related problem.
1773313 - RHV Metric store installation fails with error: "You need to install \"jmespath\" prior to running json_query filter"
1777954 - VM Templates greater then 101 quantity are not listed/reported in RHV-M Webadmin UI.
1779580 - drop rhvm-doc package
1781001 - CVE-2019-19336 ovirt-engine: response_type parameter allows reflected XSS
1782236 - Windows Update (the drivers) enablement
1782279 - Warning message for low space is not received on Imported Storage domain
1782882 - qemu-kvm: kvm_init_vcpu failed: Function not implemented
1784049 - Rhel6 guest with cluster default q35 chipset causes kernel panic
1784385 - Still requiring rhvm-doc in rhvm-setup-plugins
1785750 - [RFE] Ability to change default VM action (Suspend) in the VM Portal.
1788424 - Importing a VM having direct LUN attached using virtio driver is failing with error "VirtIO-SCSI is disabled for the VM"
1796809 - Build apache-sshd for RHV 4.4 RHEL 8
1796811 - Remove bundled apache-sshd library
1796815 - Build snmp4j for RHV 4.4 RHEL 8
1796817 - Remove bundled snmp4j library
1797316 - Snapshot creation from VM fails on second snapshot and afterwords
1797500 - Add disk operation failed to complete.
1798114 - Build apache-commons-digester for RHV 4.4 RHEL 8
1798117 - Build apache-commons-configuration for RHV 4.4 RHEL 8
1798120 - Build apache-commons-jexl for RHV 4.4 RHEL 8
1798127 - Build apache-commons-collections4 for RHV 4.4 RHEL 8
1798137 - Build apache-commons-vfs for RHV 4.4 RHEL 8
1799171 - Build ws-commons-util for RHV 4.4 RHEL 8
1799204 - Build xmlrpc for RHV 4.4 RHEL 8
1801149 - CVE-2019-13990 libquartz: XXE attacks via job description
1801709 - Disable activation of the host while Enroll certificate flow is still in progress
1803597 - rhv-image-discrepancies should skip storage domains in maintenance mode and ISO/Export
1805669 - change requirement on rhvm package from spice-client-msi to spice-client-win
1806276 - [HE] ovirt-provider-ovn is non-functional on 4.3.9 Hosted-Engine
1807047 - Build m2crypto for RHV 4.4 RHEL 8
1807860 - [RFE] Allow resource allocation options to be customized
1808096 - Uploading ISOs causes "Uncaught exception occurred. Please try reloading the page. Details: (TypeError) : a.n is null"
1808126 - host_service.install() does not work with deploy_hosted_engine as True.
1809040 - [CNV&RHV] let the user know that token is not valid anymore
1809052 - [CNV&RHV] ovirt-engine log file spammed by failed timers ( approx 3-5 messages/sec )
1809875 - rhv-image-discrepancies only compares images on the last DC
1809877 - rhv-image-discrepancies sends dump-volume-chains with parameter that is ignored
1810893 - mountOptions is ignored for "import storage domain" from GUI
1811865 - [Scale] Host Monitoring generates excessive amount of qos related sql queries
1811869 - [Scale] Webadmin\REST for host interface list response time is too long because of excessive amount of qos related sql queries
1812875 - Unable to create VMs when french Language is selected for the rhvm gui.
1813305 - Engine updating SLA policies of VMs continuously in an environment which is not having any QOS configured
1813344 - CVE-2020-7598 nodejs-minimist: prototype pollution allows adding or modifying properties of Object.prototype using a constructor or proto payload
1814197 - [CNV&RHV] when provider is remover DC is left behind and active
1814215 - [CNV&RHV] Adding new provider to engine fails after succesfull test
1816017 - Build log4j12 for RHV 4.4 EL8
1816643 - [CNV&RHV] VM created in CNV not visible in RHV
1816654 - [CNV&RHV] adding provider with already created vm failed
1816693 - [CNV&RHV] CNV VM failed to restart even if 1st dialog looks fine
1816739 - [CNV&RHV] CNV VM updated form CNV side doesn't update vm properties over on RHV side
1817467 - [Tracking] Migration path between RHV 4.3 and 4.4
1818745 - rhv-log-collector-analyzer 0.2.17 still requires pyhton2
1819201 - [CodeChange][i18n] oVirt 4.4 rhv branding - translation update
1819248 - Cannot upgrade host after engine setup
1819514 - Failed to register 4.4 host to the latest engine (4.4.0-0.29.master.el8ev)
1819960 - NPE on ImportVmTemplateFromConfigurationCommand when creating VM from ovf_data
1820621 - Build apache-commons-compress for RHV 4.4 EL8
1820638 - Build apache-commons-jxpath for RHV 4.4 EL8
1821164 - Failed snapshot creation can cause data corruption of other VMs
1821930 - Enable only TLSv1.2+ protocol for SPICE on EL7 hosts
1824095 - VM portal shows only error
1825793 - RHV branding is missing after upgrade from 4.3
1826248 - [4.4][ovirt-cockpit-sso] Compatibility issues with python3
1826437 - The console client resources page return HTTP code 500
1826801 - [CNV&RHV] update of memory on cnv side does not propagate to rhv
1826855 - [cnv&rhv] update of cpu on cnv side causing expetion in engine.log
1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method
1828669 - After SPM select the engine lost communication to all hosts until restarted [improved logging]
1828736 - [CNV&RHV] cnv template is not propagated to rhv
1829189 - engine-setup httpd ssl configuration conflicts with Red Hat Insights
1829656 - Failed to register 4.3 host to 4.4 engine with 4.3 cluster (4.4.0-0.33.master.el8ev)
1829830 - vhost custom properties does not accept '-'
1832161 - rhv-log-collector-analyzer fails with UnicodeDecodeError on RHEL8
1834523 - Edit VM -> Enable Smartcard sharing does not stick when VM is running
1838493 - Live snapshot made with freeze in the engine will cause the FS to be frozen
1841495 - Upgrade openstack-java-sdk to 3.2.9
1842495 - high cpu usage after entering wrong search pattern in RHVM
1844270 - [vGPU] nodisplay option for mdev broken since mdev scheduling unit
1844855 - Missing images (favicon.ico, banner logo) and missing brand.css file on VM portal d/s installation
1845473 - Exporting an OVA file from a VM results in its ovf file having a format of RAW when the disk is COW
1847420 - CVE-2020-10775 ovirt-engine: Redirect to arbitrary URL allows for phishing
1850004 - CVE-2020-11023 jQuery: passing HTML containing elements to manipulation methods could result in untrusted code execution
1853444 - [CodeChange][i18n] oVirt 4.4 rhv branding - translation update (July-2020)
1854563 - [4.4 downstream only][RFE] Include a link to grafana on front page
- Package List:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:
Source:
ansible-runner-1.4.5-1.el8ar.src.rpm
ansible-runner-service-1.0.2-1.el8ev.src.rpm
apache-commons-collections4-4.4-1.el8ev.src.rpm
apache-commons-compress-1.18-1.el8ev.src.rpm
apache-commons-configuration-1.10-1.el8ev.src.rpm
apache-commons-jexl-2.1.1-1.el8ev.src.rpm
apache-commons-jxpath-1.3-29.el8ev.src.rpm
apache-commons-vfs-2.4.1-1.el8ev.src.rpm
apache-sshd-2.5.1-1.el8ev.src.rpm
ebay-cors-filter-1.0.1-4.el8ev.src.rpm
ed25519-java-0.3.0-1.el8ev.src.rpm
engine-db-query-1.6.1-1.el8ev.src.rpm
java-client-kubevirt-0.5.0-1.el8ev.src.rpm
log4j12-1.2.17-22.el8ev.src.rpm
m2crypto-0.35.2-5.el8ev.src.rpm
makeself-2.4.0-4.el8ev.src.rpm
novnc-1.1.0-1.el8ost.src.rpm
openstack-java-sdk-3.2.9-1.el8ev.src.rpm
ovirt-cockpit-sso-0.1.4-1.el8ev.src.rpm
ovirt-engine-4.4.1.8-0.7.el8ev.src.rpm
ovirt-engine-api-explorer-0.0.6-1.el8ev.src.rpm
ovirt-engine-dwh-4.4.1.2-1.el8ev.src.rpm
ovirt-engine-extension-aaa-jdbc-1.2.0-1.el8ev.src.rpm
ovirt-engine-extension-aaa-ldap-1.4.0-1.el8ev.src.rpm
ovirt-engine-extension-aaa-misc-1.1.0-1.el8ev.src.rpm
ovirt-engine-extension-logger-log4j-1.1.0-1.el8ev.src.rpm
ovirt-engine-extensions-api-1.0.1-1.el8ev.src.rpm
ovirt-engine-metrics-1.4.1.1-1.el8ev.src.rpm
ovirt-engine-ui-extensions-1.2.2-1.el8ev.src.rpm
ovirt-fast-forward-upgrade-1.1.6-0.el8ev.src.rpm
ovirt-log-collector-4.4.2-1.el8ev.src.rpm
ovirt-scheduler-proxy-0.1.9-1.el8ev.src.rpm
ovirt-web-ui-1.6.3-1.el8ev.src.rpm
python-aniso8601-0.82-4.el8ost.src.rpm
python-flask-1.0.2-2.el8ost.src.rpm
python-flask-restful-0.3.6-8.el8ost.src.rpm
python-netaddr-0.7.19-8.1.el8ost.src.rpm
python-notario-0.0.16-2.el8cp.src.rpm
python-ovsdbapp-0.17.1-0.20191216120142.206cf14.el8ost.src.rpm
python-pbr-5.1.2-2.el8ost.src.rpm
python-six-1.12.0-1.el8ost.src.rpm
python-websocket-client-0.54.0-1.el8ost.src.rpm
python-werkzeug-0.16.0-1.el8ost.src.rpm
rhv-log-collector-analyzer-1.0.2-1.el8ev.src.rpm
rhvm-branding-rhv-4.4.4-1.el8ev.src.rpm
rhvm-dependencies-4.4.0-1.el8ev.src.rpm
rhvm-setup-plugins-4.4.2-1.el8ev.src.rpm
snmp4j-2.4.1-1.el8ev.src.rpm
unboundid-ldapsdk-4.0.14-1.el8ev.src.rpm
vdsm-jsonrpc-java-1.5.4-1.el8ev.src.rpm
ws-commons-util-1.0.2-1.el8ev.src.rpm
xmlrpc-3.1.3-1.el8ev.src.rpm
noarch:
ansible-runner-1.4.5-1.el8ar.noarch.rpm
ansible-runner-service-1.0.2-1.el8ev.noarch.rpm
apache-commons-collections4-4.4-1.el8ev.noarch.rpm
apache-commons-collections4-javadoc-4.4-1.el8ev.noarch.rpm
apache-commons-compress-1.18-1.el8ev.noarch.rpm
apache-commons-compress-javadoc-1.18-1.el8ev.noarch.rpm
apache-commons-configuration-1.10-1.el8ev.noarch.rpm
apache-commons-jexl-2.1.1-1.el8ev.noarch.rpm
apache-commons-jexl-javadoc-2.1.1-1.el8ev.noarch.rpm
apache-commons-jxpath-1.3-29.el8ev.noarch.rpm
apache-commons-jxpath-javadoc-1.3-29.el8ev.noarch.rpm
apache-commons-vfs-2.4.1-1.el8ev.noarch.rpm
apache-commons-vfs-ant-2.4.1-1.el8ev.noarch.rpm
apache-commons-vfs-examples-2.4.1-1.el8ev.noarch.rpm
apache-commons-vfs-javadoc-2.4.1-1.el8ev.noarch.rpm
apache-sshd-2.5.1-1.el8ev.noarch.rpm
apache-sshd-javadoc-2.5.1-1.el8ev.noarch.rpm
ebay-cors-filter-1.0.1-4.el8ev.noarch.rpm
ed25519-java-0.3.0-1.el8ev.noarch.rpm
ed25519-java-javadoc-0.3.0-1.el8ev.noarch.rpm
engine-db-query-1.6.1-1.el8ev.noarch.rpm
java-client-kubevirt-0.5.0-1.el8ev.noarch.rpm
log4j12-1.2.17-22.el8ev.noarch.rpm
log4j12-javadoc-1.2.17-22.el8ev.noarch.rpm
makeself-2.4.0-4.el8ev.noarch.rpm
novnc-1.1.0-1.el8ost.noarch.rpm
openstack-java-ceilometer-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-ceilometer-model-3.2.9-1.el8ev.noarch.rpm
openstack-java-cinder-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-cinder-model-3.2.9-1.el8ev.noarch.rpm
openstack-java-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-glance-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-glance-model-3.2.9-1.el8ev.noarch.rpm
openstack-java-heat-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-heat-model-3.2.9-1.el8ev.noarch.rpm
openstack-java-javadoc-3.2.9-1.el8ev.noarch.rpm
openstack-java-keystone-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-keystone-model-3.2.9-1.el8ev.noarch.rpm
openstack-java-nova-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-nova-model-3.2.9-1.el8ev.noarch.rpm
openstack-java-quantum-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-quantum-model-3.2.9-1.el8ev.noarch.rpm
openstack-java-resteasy-connector-3.2.9-1.el8ev.noarch.rpm
openstack-java-swift-client-3.2.9-1.el8ev.noarch.rpm
openstack-java-swift-model-3.2.9-1.el8ev.noarch.rpm
ovirt-cockpit-sso-0.1.4-1.el8ev.noarch.rpm
ovirt-engine-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-api-explorer-0.0.6-1.el8ev.noarch.rpm
ovirt-engine-backend-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-dbscripts-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-dwh-4.4.1.2-1.el8ev.noarch.rpm
ovirt-engine-dwh-grafana-integration-setup-4.4.1.2-1.el8ev.noarch.rpm
ovirt-engine-dwh-setup-4.4.1.2-1.el8ev.noarch.rpm
ovirt-engine-extension-aaa-jdbc-1.2.0-1.el8ev.noarch.rpm
ovirt-engine-extension-aaa-ldap-1.4.0-1.el8ev.noarch.rpm
ovirt-engine-extension-aaa-ldap-setup-1.4.0-1.el8ev.noarch.rpm
ovirt-engine-extension-aaa-misc-1.1.0-1.el8ev.noarch.rpm
ovirt-engine-extension-logger-log4j-1.1.0-1.el8ev.noarch.rpm
ovirt-engine-extensions-api-1.0.1-1.el8ev.noarch.rpm
ovirt-engine-extensions-api-javadoc-1.0.1-1.el8ev.noarch.rpm
ovirt-engine-health-check-bundler-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-metrics-1.4.1.1-1.el8ev.noarch.rpm
ovirt-engine-restapi-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-setup-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-setup-base-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-setup-plugin-cinderlib-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-setup-plugin-imageio-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-setup-plugin-ovirt-engine-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-setup-plugin-websocket-proxy-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-tools-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-tools-backup-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-ui-extensions-1.2.2-1.el8ev.noarch.rpm
ovirt-engine-vmconsole-proxy-helper-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-webadmin-portal-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-engine-websocket-proxy-4.4.1.8-0.7.el8ev.noarch.rpm
ovirt-fast-forward-upgrade-1.1.6-0.el8ev.noarch.rpm
ovirt-log-collector-4.4.2-1.el8ev.noarch.rpm
ovirt-scheduler-proxy-0.1.9-1.el8ev.noarch.rpm
ovirt-web-ui-1.6.3-1.el8ev.noarch.rpm
python-flask-doc-1.0.2-2.el8ost.noarch.rpm
python2-netaddr-0.7.19-8.1.el8ost.noarch.rpm
python2-pbr-5.1.2-2.el8ost.noarch.rpm
python2-six-1.12.0-1.el8ost.noarch.rpm
python3-aniso8601-0.82-4.el8ost.noarch.rpm
python3-ansible-runner-1.4.5-1.el8ar.noarch.rpm
python3-flask-1.0.2-2.el8ost.noarch.rpm
python3-flask-restful-0.3.6-8.el8ost.noarch.rpm
python3-netaddr-0.7.19-8.1.el8ost.noarch.rpm
python3-notario-0.0.16-2.el8cp.noarch.rpm
python3-ovirt-engine-lib-4.4.1.8-0.7.el8ev.noarch.rpm
python3-ovsdbapp-0.17.1-0.20191216120142.206cf14.el8ost.noarch.rpm
python3-pbr-5.1.2-2.el8ost.noarch.rpm
python3-six-1.12.0-1.el8ost.noarch.rpm
python3-websocket-client-0.54.0-1.el8ost.noarch.rpm
python3-werkzeug-0.16.0-1.el8ost.noarch.rpm
python3-werkzeug-doc-0.16.0-1.el8ost.noarch.rpm
rhv-log-collector-analyzer-1.0.2-1.el8ev.noarch.rpm
rhvm-4.4.1.8-0.7.el8ev.noarch.rpm
rhvm-branding-rhv-4.4.4-1.el8ev.noarch.rpm
rhvm-dependencies-4.4.0-1.el8ev.noarch.rpm
rhvm-setup-plugins-4.4.2-1.el8ev.noarch.rpm
snmp4j-2.4.1-1.el8ev.noarch.rpm
snmp4j-javadoc-2.4.1-1.el8ev.noarch.rpm
unboundid-ldapsdk-4.0.14-1.el8ev.noarch.rpm
unboundid-ldapsdk-javadoc-4.0.14-1.el8ev.noarch.rpm
vdsm-jsonrpc-java-1.5.4-1.el8ev.noarch.rpm
ws-commons-util-1.0.2-1.el8ev.noarch.rpm
ws-commons-util-javadoc-1.0.2-1.el8ev.noarch.rpm
xmlrpc-client-3.1.3-1.el8ev.noarch.rpm
xmlrpc-common-3.1.3-1.el8ev.noarch.rpm
xmlrpc-javadoc-3.1.3-1.el8ev.noarch.rpm
xmlrpc-server-3.1.3-1.el8ev.noarch.rpm
x86_64:
m2crypto-debugsource-0.35.2-5.el8ev.x86_64.rpm
python3-m2crypto-0.35.2-5.el8ev.x86_64.rpm
python3-m2crypto-debuginfo-0.35.2-5.el8ev.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2017-18635
https://access.redhat.com/security/cve/CVE-2019-8331
https://access.redhat.com/security/cve/CVE-2019-10086
https://access.redhat.com/security/cve/CVE-2019-13990
https://access.redhat.com/security/cve/CVE-2019-17195
https://access.redhat.com/security/cve/CVE-2019-19336
https://access.redhat.com/security/cve/CVE-2020-7598
https://access.redhat.com/security/cve/CVE-2020-10775
https://access.redhat.com/security/cve/CVE-2020-11022
https://access.redhat.com/security/cve/CVE-2020-11023
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/technical_notes
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2020 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBXylir9zjgjWX9erEAQii/A//bJm3u0+ul+LdQwttSJJ79OdVqcp3FktP tdPj8AFbB6F9KkuX9FAQja0/2pgZAldB3Eyz57GYTxyDD1qeMqYSayGHCH01GWAn u8uF90lcSz6YvgEPDh1mWhLYQMfdWT6IUuKOEHldt8TyHbc7dX3xCbsLDzNCxGbl QuPSFPQBJaAXETSw42NGzdUzaM9zoQ0Mngj+Owcgw53YyBy3BSLAb5bKuijvkcLy SVCAxxiQ89E+cnETKYIv4dOfqXGA5wLg68hDmUQyFcXHA9nQbJM9Q0s1fbZ2Wav1 oGGTqJDTgVElxrHB5pYJ6pu484ZgJealkBCrHA2OBsMJUadwitVvQLXFZF5OyN0N f/vtZ1ua4mZADa61qfnlmVRiyISwmPPWIOImA3TIE5Q8Yl5ucCqtDjQPoJAbXsUl Y22Bb5x7JyrN0nyOgwh6BGGK51CmOaP+xNuWD7osI24pnzdmPTZuJrZLePxgPgac WWQNznzvokknva2ofvujAm+DEl+W7W3A8Vs9wkmUWYlaVC7GFLEkcvQjjHahZ7kh dVJNoh70vpA+aJCMQHYK6MGtCSAWoqXkRTsHb3Stfm2vLLz6GYxY5OuvB7Z0ME1N zCiFjBla5+3nKx5ab8Pola56T1wRULHL6zYN9GTsOzxjdJsKHXBVeV8OYcnoHiza 2TrKn2dtZwI= =92Q3 -----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
See the following documentation, which will be updated shortly for release 3.11.219, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_release_notes.html
This update is available via the Red Hat Network. Bugs fixed (https://bugzilla.redhat.com/):
1828406 - CVE-2020-11022 jquery: Cross-site scripting in the jQuery.htmlPrefilter method
You can also manage user accounts for web applications, mobile applications, and RESTful web services.

Description:
Red Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak project, that provides authentication and standards-based single sign-on capabilities for web and mobile applications.
Security Fix(es):
* jquery: Prototype pollution in object's prototype leading to denial of service, remote code execution, or property injection (CVE-2019-11358)

* jquery: Cross-site scripting via cross-domain ajax requests (CVE-2015-9251)

* bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute (CVE-2018-14040)

* jquery: Untrusted code execution via tag in HTML passed to DOM manipulation methods (CVE-2020-11023)

* jquery: Cross-site scripting in the jQuery.htmlPrefilter method (CVE-2020-11022)

* bootstrap: XSS in the data-target attribute (CVE-2016-10735)

* bootstrap: Cross-site Scripting (XSS) in the data-target property of scrollspy (CVE-2018-14041)

* sshd-common: mina-sshd: Java unsafe deserialization vulnerability (CVE-2022-45047)

* woodstox-core: Denial of Service when woodstox is used to serialise XML data (CVE-2022-40152)

* bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip (CVE-2018-14042)

* bootstrap: XSS in the tooltip or popover data-template attribute (CVE-2019-8331)

* nodejs-moment: Regular expression denial of service (CVE-2017-18214)

* wildfly-elytron: possible timing attacks via use of unsafe comparator (CVE-2022-3143)

* jackson-databind: use of deeply nested arrays (CVE-2022-42004)

* jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS (CVE-2022-42003)

* jettison: parser crash via stack overflow (CVE-2022-40149)

* jettison: memory exhaustion via user-supplied XML or JSON data (CVE-2022-40150)

* jettison: if the value in a map is the map itself, new JSONObject(map) causes a StackOverflowError, which may lead to denial of service (CVE-2022-45693)

* CXF: Apache CXF SSRF vulnerability (CVE-2022-46364)
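The root cause of CVE-2020-11022 above was jQuery's pre-3.5.0 `jQuery.htmlPrefilter`, which rewrote XHTML-style self-closing tags (e.g. `<style/>`) into open/close pairs before the markup reached the DOM, so HTML that had already been sanitized could be mutated into executable markup. The sketch below reimplements that transformation in Python purely for illustration; the regex is adapted from jQuery's source and the payload is a hypothetical example, not taken from this advisory.

```python
import re

# Simplified form of the rxhtmlTag regex used by jQuery < 3.5.0
# in jQuery.htmlPrefilter.
rxhtmlTag = re.compile(
    r"<(?!area|br|col|embed|hr|img|input|link|meta|param)"
    r"(([a-z][^/\0>\x20\t\r\n\f]*)[^>]*)/>",
    re.IGNORECASE,
)

def html_prefilter(html: str) -> str:
    # Old behavior: expand self-closing tags into an open/close pair.
    return rxhtmlTag.sub(r"<\1></\2>", html)

payload = "<style><style/><img src=x onerror=alert(1)>"
print(html_prefilter(payload))
# -> <style><style></style><img src=x onerror=alert(1)>
# The inner <style/> becomes <style></style>, which closes the outer
# <style> element early, so the <img onerror=...> is parsed as live
# HTML instead of inert CSS text. jQuery 3.5.0 removed this rewrite.
```

Because the mutation happened after sanitization, filtering the input was not sufficient; the fix in 3.5.0 was to stop rewriting the markup altogether.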
JIRA issues fixed (https://issues.jboss.org/):
JBEAP-23864 - (7.4.z) Upgrade xmlsec from 2.1.7.redhat-00001 to 2.2.3.redhat-00001 JBEAP-23865 - GSS Upgrade Apache CXF from 3.3.13.redhat-00001 to 3.4.10.redhat-00001 JBEAP-23866 - (7.4.z) Upgrade wss4j from 2.2.7.redhat-00001 to 2.3.3.redhat-00001 JBEAP-23928 - Tracker bug for the EAP 7.4.9 release for RHEL-9 JBEAP-24055 - (7.4.z) Upgrade HAL from 3.3.15.Final-redhat-00001 to 3.3.16.Final-redhat-00001 JBEAP-24081 - (7.4.z) Upgrade Elytron from 1.15.14.Final-redhat-00001 to 1.15.15.Final-redhat-00001 JBEAP-24095 - (7.4.z) Upgrade elytron-web from 1.9.2.Final-redhat-00001 to 1.9.3.Final-redhat-00001 JBEAP-24100 - GSS Upgrade Undertow from 2.2.20.SP1-redhat-00001 to 2.2.22.SP3-redhat-00001 JBEAP-24127 - (7.4.z) UNDERTOW-2123 - Update AsyncContextImpl.dispatch to use proper value JBEAP-24128 - (7.4.z) Upgrade Hibernate Search from 5.10.7.Final-redhat-00001 to 5.10.13.Final-redhat-00001 JBEAP-24132 - GSS Upgrade Ironjacamar from 1.5.3.SP2-redhat-00001 to 1.5.10.Final-redhat-00001 JBEAP-24147 - (7.4.z) Upgrade jboss-ejb-client from 4.0.45.Final-redhat-00001 to 4.0.49.Final-redhat-00001 JBEAP-24167 - (7.4.z) Upgrade WildFly Core from 15.0.19.Final-redhat-00001 to 15.0.21.Final-redhat-00002 JBEAP-24191 - GSS Upgrade remoting from 5.0.26.SP1-redhat-00001 to 5.0.27.Final-redhat-00001 JBEAP-24195 - GSS Upgrade JSF API from 3.0.0.SP06-redhat-00001 to 3.0.0.SP07-redhat-00001 JBEAP-24207 - (7.4.z) Upgrade Soteria from 1.0.1.redhat-00002 to 1.0.1.redhat-00003 JBEAP-24248 - (7.4.z) ELY-2492 - Upgrade sshd-common in Elytron from 2.7.0 to 2.9.2 JBEAP-24426 - (7.4.z) Upgrade Elytron from 1.15.15.Final-redhat-00001 to 1.15.16.Final-redhat-00001 JBEAP-24427 - (7.4.z) Upgrade WildFly Core from 15.0.21.Final-redhat-00002 to 15.0.22.Final-redhat-00001
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202004-2191",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "insurance data foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6-8.1.0"
},
{
"model": "financial services institutional performance analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"model": "healthcare foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.2.1"
},
{
"model": "communications eagle application processor",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.1.0"
},
{
"model": "financial services asset liability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "financial services loan loss forecasting and provisioning",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "healthcare foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.2.0"
},
{
"model": "financial services funds transfer pricing",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"model": "financial services analytical applications infrastructure",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6.0.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "32"
},
{
"model": "max data",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.2"
},
{
"model": "retail back office",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.0"
},
{
"model": "policy automation",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.20"
},
{
"model": "financial services data foundation",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "financial services asset liability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "financial services data foundation",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "oncommand system manager",
"scope": "lte",
"trust": 1.0,
"vendor": "netapp",
"version": "3.1.3"
},
{
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "8.8.6"
},
{
"model": "financial services market risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "communications application session controller",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.8m0"
},
{
"model": "financial services loan loss forecasting and provisioning",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "financial services data integration hub",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "jdeveloper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.4.0"
},
{
"model": "communications diameter signaling router idih\\:",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.2"
},
{
"model": "financial services funds transfer pricing",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "banking digital experience",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "18.1"
},
{
"model": "jquery",
"scope": "gte",
"trust": 1.0,
"vendor": "jquery",
"version": "1.2"
},
{
"model": "hospitality materials control",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.1"
},
{
"model": "snap creator framework",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "financial services institutional performance analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "insurance insbridge rating and underwriting",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "5.6.1.0"
},
{
"model": "siebel ui framework",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "20.8"
},
{
"model": "policy automation for mobile devices",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.20"
},
{
"model": "snapcenter",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "7.70"
},
{
"model": "financial services analytical applications reconciliation framework",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "jquery",
"scope": "lt",
"trust": 1.0,
"vendor": "jquery",
"version": "3.5.0"
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.1.3.0.0"
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.3.0"
},
{
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19.2"
},
{
"model": "communications eagle application processor",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.4.0"
},
{
"model": "financial services profitability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "insurance insbridge rating and underwriting",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.0.0.0"
},
{
"model": "financial services asset liability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"model": "enterprise manager ops center",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.4.0.0"
},
{
"model": "hospitality simphony",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.1"
},
{
"model": "banking digital experience",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "20.1"
},
{
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.1"
},
{
"model": "retail customer management and segmentation foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19.0"
},
{
"model": "insurance insbridge rating and underwriting",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.6.0.0"
},
{
"model": "financial services analytical applications reconciliation framework",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "financial services basel regulatory capital basic",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "retail returns management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.1"
},
{
"model": "financial services analytical applications infrastructure",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "10.3.6.0.0"
},
{
"model": "financial services hedge management and ifrs valuations",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "blockchain platform",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "21.1.2"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "8.8.0"
},
{
"model": "policy automation for mobile devices",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.0"
},
{
"model": "retail returns management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.0"
},
{
"model": "communications services gatekeeper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.0"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.56"
},
{
"model": "retail back office",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.1"
},
{
"model": "log correlation engine",
"scope": "lt",
"trust": 1.0,
"vendor": "tenable",
"version": "6.0.9"
},
{
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "20.1"
},
{
"model": "financial services data governance for us regulatory reporting",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.9"
},
{
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.1"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hospitality simphony",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.2"
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.1.1.0.0"
},
{
"model": "agile product lifecycle management for process",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "6.2.0.0"
},
{
"model": "financial services liquidity risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "insurance allocation manager for enterprise profitability",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "financial services basel regulatory capital basic",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "healthcare foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.3.0"
},
{
"model": "financial services analytical applications reconciliation framework",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "financial services price creation and discovery",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "financial services profitability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19.1"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "financial services data integration hub",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.4.0"
},
{
"model": "communications diameter signaling router idih\\:",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.0"
},
{
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.2"
},
{
"model": "financial services institutional performance analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "financial services basel regulatory capital internal ratings based approach",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "financial services balance sheet planning",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "8.7.0"
},
{
"model": "financial services liquidity risk management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "financial services market risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "hospitality simphony",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "19.1.2"
},
{
"model": "insurance accounting analyzer",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.9"
},
{
"model": "financial services hedge management and ifrs valuations",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "agile product supplier collaboration for process",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "6.2.0.0"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "financial services basel regulatory capital basic",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "financial services regulatory reporting for us federal reserve",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.9"
},
{
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.3"
},
{
"model": "financial services regulatory reporting for european banking authority",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.57"
},
{
"model": "hospitality simphony",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19.1.0-19.1.2"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "financial services analytical applications infrastructure",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0.0.0"
},
{
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "8.7.14"
},
{
"model": "financial services profitability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "financial services funds transfer pricing",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "financial services liquidity risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"model": "financial services regulatory reporting for european banking authority",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "hospitality simphony",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "19.1.0"
},
{
"model": "jdeveloper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.1.1.9.0"
},
{
"model": "financial services basel regulatory capital internal ratings based approach",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "policy automation connector for siebel",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "10.4.6"
},
{
"model": "financial services data governance for us regulatory reporting",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "oncommand system manager",
"scope": "gte",
"trust": 1.0,
"vendor": "netapp",
"version": "3.0"
},
{
"model": "financial services hedge management and ifrs valuations",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "storagetek acsls",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.5.1"
},
{
"model": "communications webrtc session controller",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.2"
},
{
"model": "policy automation",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.0"
},
{
"model": "financial services loan loss forecasting and provisioning",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "financial services analytical applications infrastructure",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "jdeveloper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.3.0"
},
{
"model": "communications billing and revenue management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.5.0.23.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "31"
},
{
"model": "communications billing and revenue management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.0.0.3.0"
},
{
"model": "healthcare foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.1.1"
},
{
"model": "insurance data foundation",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "insurance data foundation",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "application testing suite",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.3.0.1"
},
{
"model": "financial services liquidity risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "financial services price creation and discovery",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"model": "enterprise session border controller",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.4"
},
{
"model": "financial services regulatory reporting for us federal reserve",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"model": "insurance allocation manager for enterprise profitability",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"model": "financial services data integration hub",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "7.0"
},
{
"model": "oncommand insight",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "financial services basel regulatory capital internal ratings based approach",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:jquery:jquery:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.5.0",
"versionStartIncluding": "1.2",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:drupal:drupal:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.70",
"versionStartIncluding": "7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:drupal:drupal:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.7.14",
"versionStartIncluding": "8.7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:drupal:drupal:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.8.6",
"versionStartIncluding": "8.8.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.1.3.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:jdeveloper:11.1.1.9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_back_office:14.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_back_office:14.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.56:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:10.3.6.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_webrtc_session_controller:7.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.2.1.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:agile_product_lifecycle_management_for_process:6.2.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:application_testing_suite:13.3.0.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_returns_management:14.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_returns_management:14.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:jdeveloper:12.2.1.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:policy_automation_connector_for_siebel:10.4.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_market_risk_measurement_and_management:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hospitality_materials_control:18.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_digital_experience:18.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_digital_experience:18.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_digital_experience:19.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_digital_experience:18.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.2.1.4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_hedge_management_and_ifrs_valuations:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_loan_loss_forecasting_and_provisioning:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_asset_liability_management:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_asset_liability_management:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_profitability_management:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_profitability_management:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_funds_transfer_pricing:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_funds_transfer_pricing:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_price_creation_and_discovery:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_liquidity_risk_management:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_liquidity_risk_measurement_and_management:8.0.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_liquidity_risk_measurement_and_management:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_balance_sheet_planning:8.0.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:14.1.1.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_analytical_applications_infrastructure:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.1.0.0.0",
"versionStartIncluding": "8.0.6.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_customer_management_and_segmentation_foundation:19.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_foundation:7.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_foundation:7.2.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_foundation:7.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_foundation:7.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_billing_and_revenue_management:12.0.0.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_billing_and_revenue_management:7.5.0.23.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_governance_for_us_regulatory_reporting:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.9",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hospitality_simphony:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "19.1.2",
"versionStartIncluding": "19.1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_digital_experience:19.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_basel_regulatory_capital_internal_ratings_based_approach:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_data_foundation:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.1.0",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_price_creation_and_discovery:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_profitability_management:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_digital_experience:20.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:policy_automation:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "12.2.20",
"versionStartIncluding": "12.2.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_analytical_applications_reconciliation_framework:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_loan_loss_forecasting_and_provisioning:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_basel_regulatory_capital_internal_ratings_based_approach:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:siebel_ui_framework:20.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_application_session_controller:3.8m0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_institutional_performance_analytics:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_signaling_router_idih\\::*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.2.2",
"versionStartIncluding": "8.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_institutional_performance_analytics:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_foundation:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.1.0",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_insbridge_rating_and_underwriting:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "5.6.0.0",
"versionStartIncluding": "5.0.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_liquidity_risk_measurement_and_management:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_institutional_performance_analytics:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_basel_regulatory_capital_basic:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_regulatory_reporting_for_us_federal_reserve:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.9",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_regulatory_reporting_for_european_banking_authority:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.1.0",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:policy_automation_for_mobile_devices:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "12.2.20",
"versionStartIncluding": "12.2.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_allocation_manager_for_enterprise_profitability:8.0.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_insbridge_rating_and_underwriting:5.6.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hospitality_simphony:18.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_integration_hub:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_integration_hub:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_accounting_analyzer:8.0.9:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_basel_regulatory_capital_basic:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_hedge_management_and_ifrs_valuations:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_analytical_applications_reconciliation_framework:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_allocation_manager_for_enterprise_profitability:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hospitality_simphony:18.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_asset_liability_management:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_ops_center:12.4.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:enterprise_session_border_controller:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_market_risk_measurement_and_management:8.0.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:jdeveloper:12.2.1.4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_funds_transfer_pricing:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_integration_hub:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_services_gatekeeper:7.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_eagle_application_processor:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "16.4.0",
"versionStartIncluding": "16.1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:blockchain_platform:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "21.1.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:storagetek_acsls:8.5.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:snap_creator_framework:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:snapcenter:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_system_manager:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "3.1.3",
"versionStartIncluding": "3.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:max_data:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:tenable:log_correlation_engine:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "6.0.9",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.1.3.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:agile_product_supplier_collaboration_for_process:6.2.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:jdeveloper:11.1.1.9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_back_office:14.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_back_office:14.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.56:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:10.3.6.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_webrtc_session_controller:7.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.2.1.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_returns_management:14.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_returns_management:14.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:jdeveloper:12.2.1.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:policy_automation_connector_for_siebel:10.4.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_market_risk_measurement_and_management:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hospitality_materials_control:18.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.2.1.4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_hedge_management_and_ifrs_valuations:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_loan_loss_forecasting_and_provisioning:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_asset_liability_management:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_asset_liability_management:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_profitability_management:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_profitability_management:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_funds_transfer_pricing:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_funds_transfer_pricing:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_price_creation_and_discovery:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_liquidity_risk_management:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_liquidity_risk_measurement_and_management:8.0.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_liquidity_risk_measurement_and_management:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_balance_sheet_planning:8.0.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_analytical_applications_infrastructure:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.1.0",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:14.1.1.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:retail_customer_management_and_segmentation_foundation:19.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_foundation:7.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_foundation:7.2.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_foundation:7.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_foundation:7.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_billing_and_revenue_management:12.0.0.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_billing_and_revenue_management:7.5.0.23.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_governance_for_us_regulatory_reporting:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.9",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_basel_regulatory_capital_internal_ratings_based_approach:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_price_creation_and_discovery:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_profitability_management:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:policy_automation:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "12.2.20",
"versionStartIncluding": "12.2.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_analytical_applications_reconciliation_framework:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_loan_loss_forecasting_and_provisioning:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_basel_regulatory_capital_internal_ratings_based_approach:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:siebel_ui_framework:20.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_application_session_controller:3.8m0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_institutional_performance_analytics:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_signaling_router_idih\\::*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.2.2",
"versionStartIncluding": "8.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_institutional_performance_analytics:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_foundation:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.1.0",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_insbridge_rating_and_underwriting:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "5.6.0.0",
"versionStartIncluding": "5.0.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_liquidity_risk_measurement_and_management:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_institutional_performance_analytics:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_basel_regulatory_capital_basic:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_regulatory_reporting_for_us_federal_reserve:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.9",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_regulatory_reporting_for_european_banking_authority:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.1.0",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:policy_automation_for_mobile_devices:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "12.2.20",
"versionStartIncluding": "12.2.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_allocation_manager_for_enterprise_profitability:8.0.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_insbridge_rating_and_underwriting:5.6.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hospitality_simphony:18.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_integration_hub:8.0.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_integration_hub:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_accounting_analyzer:8.0.9:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_basel_regulatory_capital_basic:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.8",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_hedge_management_and_ifrs_valuations:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_analytical_applications_reconciliation_framework:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_allocation_manager_for_enterprise_profitability:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hospitality_simphony:18.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_asset_liability_management:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_ops_center:12.4.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:enterprise_session_border_controller:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_market_risk_measurement_and_management:8.0.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:jdeveloper:12.2.1.4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_funds_transfer_pricing:8.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_data_integration_hub:8.0.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hospitality_simphony:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "19.1.2",
"versionStartIncluding": "19.1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:insurance_data_foundation:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.1.0",
"versionStartIncluding": "8.0.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_digital_experience:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "20.1",
"versionStartIncluding": "18.1",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "161727"
},
{
"db": "PACKETSTORM",
"id": "158750"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159727"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170819"
}
],
"trust": 0.7
},
"cve": "CVE-2020-11022",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": true,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "VHN-163559",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/AU:N/C:N/I:P/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 6.1,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 2.8,
"impactScore": 2.7,
"integrityImpact": "LOW",
"privilegesRequired": "NONE",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
"version": "3.1"
},
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "security-advisories@github.com",
"availabilityImpact": "NONE",
"baseScore": 6.9,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.6,
"impactScore": 4.7,
"integrityImpact": "LOW",
"privilegesRequired": "NONE",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:C/C:H/I:L/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2020-11022",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "security-advisories@github.com",
"id": "CVE-2020-11022",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-202004-2429",
"trust": 0.6,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-163559",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2429"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. jQuery is an open source, cross-browser JavaScript library developed by American John Resig programmers. The library simplifies the operation between HTML and JavaScript, and has the characteristics of modularization and plug-in extension. The vulnerability stems from the lack of correct validation of client data in WEB applications. An attacker could exploit this vulnerability to execute client code. Solution:\n\nFor information on upgrading Ansible Tower, reference the Ansible Tower\nUpgrade and Migration Guide:\nhttps://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/\nindex.html\n\n4. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update\nAdvisory ID: RHSA-2020:3247-01\nProduct: Red Hat Virtualization\nAdvisory URL: https://access.redhat.com/errata/RHSA-2020:3247\nIssue date: 2020-08-04\nCVE Names: CVE-2017-18635 CVE-2019-8331 CVE-2019-10086 \n CVE-2019-13990 CVE-2019-17195 CVE-2019-19336 \n CVE-2020-7598 CVE-2020-10775 CVE-2020-11022 \n CVE-2020-11023 \n=====================================================================\n\n1. Summary:\n\nUpdated ovirt-engine packages that fix several bugs and add various\nenhancements are now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. 
Relevant releases/architectures:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch, x86_64\n\n3. Description:\n\nThe ovirt-engine package provides the Red Hat Virtualization Manager, a\ncentralized management platform that allows system administrators to view\nand manage virtual machines. The Manager provides a comprehensive range of\nfeatures including search capabilities, resource management, live\nmigrations, and virtual infrastructure provisioning. \n\nThe Manager is a JBoss Application Server application that provides several\ninterfaces through which the virtual environment can be accessed and\ninteracted with, including an Administration Portal, a VM Portal, and a\nRepresentational State Transfer (REST) Application Programming Interface\n(API). \n\nA list of bugs fixed in this update is available in the Technical Notes\nbook:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht\nml-single/technical_notes\n\nSecurity Fix(es):\n\n* apache-commons-beanutils: does not suppresses the class property in\nPropertyUtilsBean by default (CVE-2019-10086)\n\n* libquartz: XXE attacks via job description (CVE-2019-13990)\n\n* novnc: XSS vulnerability via the messages propagated to the status field\n(CVE-2017-18635)\n\n* bootstrap: XSS in the tooltip or popover data-template attribute\n(CVE-2019-8331)\n\n* nimbus-jose-jwt: Uncaught exceptions while parsing a JWT (CVE-2019-17195)\n\n* ovirt-engine: response_type parameter allows reflected XSS\n(CVE-2019-19336)\n\n* nodejs-minimist: prototype pollution allows adding or modifying\nproperties of Object.prototype using a constructor or __proto__ payload\n(CVE-2020-7598)\n\n* ovirt-engine: Redirect to arbitrary URL allows for phishing\n(CVE-2020-10775)\n\n* Cross-site scripting due to improper injQuery.htmlPrefilter method\n(CVE-2020-11022)\n\n* jQuery: passing HTML containing \u003coption\u003e elements to manipulation methods\ncould result in untrusted code execution 
(CVE-2020-11023)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1080097 - [RFE] Allow editing disks details in the Disks tab\n1325468 - [RFE] Autostart of VMs that are down (with Engine assistance - Engine has to be up)\n1358501 - [RFE] multihost network change - notify when done\n1427717 - [RFE] Create and/or select affinity group upon VM creation. \n1475774 - RHV-M requesting four GetDeviceListVDSCommand when editing storage domain\n1507438 - not able to deploy new rhvh host when \"/tmp\" is mounted with \"noexec\" option\n1523835 - Hosted-Engine: memory hotplug does not work for engine vm\n1527843 - [Tracker] Q35 chipset support (with seabios)\n1529042 - [RFE] Changing of Cluster CPU Type does not trigger config update notification\n1535796 - Undeployment of HE is not graceful\n1546838 - [RFE] Refuse to deploy on localhost.localdomain\n1547937 - [RFE] Live Storage Migration progress bar. \n1585986 - [HE] When lowering the cluster compatibility, we need to force update the HE storage OVF store to ensure it can start up (migration will not work). \n1593800 - [RFE] forbid new mac pools with overlapping ranges\n1596178 - inconsistent display between automatic and manual Pool Type\n1600059 - [RFE] Add by default a storage lease to HA VMs\n1610212 - After updating to RHV 4.1 while trying to edit the disk, getting error \"Cannot edit Virtual Disk. Cannot edit Virtual Disk. Disk extension combined with disk compat version update isn\u0027t supported. 
Please perform the updates separately.\"\n1611395 - Unable to list Compute Templates in RHV 4.2 from Satellite 6.3.2\n1616451 - [UI] add a tooltip to explain the supported matrix for the combination of disk allocation policies, formats and the combination result\n1637172 - Live Merge hung in the volume deletion phase, leaving snapshot in a LOCKED state\n1640908 - Javascript Error popup when Managing StorageDomain with LUNs and 400+ paths\n1642273 - [UI] - left nav border highlight missing in RHV\n1647440 - [RFE][UI] Provide information about the VM next run\n1648345 - Jobs are not properly cleaned after a failed task. \n1650417 - HA is broken for VMs having disks in NFS storage domain because of Qemu OFD locking\n1650505 - Increase of ClusterCompatibilityVersion to Cluster with virtual machines with outstanding configuration changes, those changes will be reverted\n1651406 - [RFE] Allow Maintenance of Host with Enforcing VM Affinity Rules (hard affinity)\n1651939 - a new size of the direct LUN not updated in Admin Portal\n1654069 - [Downstream Clone] [UI] - grids bottom scrollbar hides bottom row\n1654889 - [RFE] Support console VNC for mediated devices\n1656621 - Importing VM OVA always enables \u0027Cloud-Init/Sysprep\u0027\n1658101 - [RESTAPI] Adding ISO disables serial console\n1659161 - Unable to edit pool that is delete protected\n1660071 - Regression in Migration of VM that starts in pause mode: took 11 hours\n1660644 - Concurrent LSMs of the same disk can be issued via the REST-API\n1663366 - USB selection option disabled even though USB support is enabled in RHV-4.2\n1664479 - Third VM fails to get migrated when host is placed into maintenance mode\n1666913 - [UI] warn users about different \"Vdsm Name\" when creating network with a fancy char or long name\n1670102 - [CinderLib] - openstack-cinder and cinderlib packages are not installed on ovirt-engine machine\n1671876 - \"Bond Active Slave\" parameter on RHV-M GUI shows an incorrect until Refresh 
Caps\n1679039 - Unable to upload image through Storage-\u003eDomain-\u003eDisk because of wrong DC\n1679110 - [RFE] change Admin Portal toast notifications location\n1679471 - [ja, de, es, fr, pt_BR] The console client resources page shows truncated title for some locales\n1679730 - Warn about host IP addresses outside range\n1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute\n1686650 - Memory snapshots\u0027 deletion logging unnecessary WARNINGS in engine.log\n1687345 - Snapshot with memory volumes can fail if the memory dump takes more than 180 seconds\n1690026 - [RFE] - Creating an NFS storage domain the engine should let the user specify exact NFS version v4.0 and not just v4\n1690155 - Disk migration progress bar not clearly visible and unusable. \n1690475 - When a live storage migration fails, the auto generated snapshot does not get removed\n1691562 - Cluster level changes are not increasing VMs generation numbers and so a new OVF_STORE content is not copied to the shared storage\n1692592 - \"\ufffcEnable menu to select boot device shows 10 device listed with cdrom at 10th slot but when selecting 10 option the VM took 1 as option and boot with disk\n1693628 - Engine generates too many updates to vm_dynamic table due to the session change\n1693813 - Do not change DC level if there are VMs running/paused with older CL. \n1695026 - Failure in creating snapshots during \"Live Storage Migration\" can result in a nonexistent snapshot\n1695635 - [RFE] Improve Host Drop-down menu in different Dialogs (i.e. 
Alphabetical sort of Hosts in Remove|New StorageDomains)\n1696245 - [RFE] Allow full customization while cloning a VM\n1696669 - Build bouncycastle for RHV 4.4 RHEL 8\n1696676 - Build ebay-cors-filter for RHV 4.4 RHEL 8\n1698009 - Build openstack-java-sdk for RHV 4.4 RHEL 8\n1698102 - Print a warning message to engine-setup, which highlights that other clusters than the Default one are not modified to use ovirt-provider-ovn as the default network provider\n1700021 - [RFE] engine-setup should warn and prompt if ca.pem is missing but other generated pki files exist\n1700036 - [RFE] Add RedFish API for host power management for RHEV\n1700319 - VM is going to pause state with \"storage I/O error\". \n1700338 - [RFE] Alternate method to configure the email Event Notifier for a user in RHV through API (instead of RHV GUI)\n1700725 - [scale] RHV-M runs out of memory due to to much data reported by the guest agent\n1700867 - Build makeself for RHV 4.4 RHEL 8\n1701476 - Build unboundid-ldapsdk for RHV 4.4 RHEL 8\n1701491 - Build RHV-M 4.4 - RHEL 8\n1701522 - Build ovirt-imageio-proxy for RHV 4.4 / RHEL 8\n1701528 - Build / Tag python-ovsdbapp for RHV 4.4 RHEL 8\n1701530 - Build / Tag ovirt-cockpit-sso for RHV 4.4 RHEL 8\n1701531 - Build / Tag ovirt-engine-api-explorer for RHV 4.4 RHEL 8\n1701533 - Build / Tag ovirt-engine-dwh for RHV 4.4 / RHEL 8\n1701538 - Build / Tag vdsm-jsonrpc-java for RHV 4.4 RHEL 8\n1701544 - Build rhvm-dependencies for RHV 4.4 RHEL 8\n1702310 - Build / Tag ovirt-engine-ui-extensions for RHV 4.4 RHEL 8\n1702312 - Build ovirt-log-collector for RHV 4.4 RHEL 8\n1703112 - PCI address of NICs are not stored in the database after a hotplug of passthrough NIC resulting in change of network device name in VM after a reboot\n1703428 - VMs migrated from KVM to RHV show warning \u0027The latest guest agent needs to be installed and running on the guest\u0027\n1707225 - [cinderlib] Cinderlib DB is missing a backup and restore option\n1708624 - Build 
rhvm-setup-plugins for RHV 4.4 - RHEL 8\n1710491 - No EVENT_ID is generated in /var/log/ovirt-engine/engine.log when VM is rebooted from OS level itself. \n1711006 - Metrics installation fails during the execution of playbook ovirt-metrics-store-installation if the environment is not having DHCP\n1712255 - Drop 4.1 datacenter/cluster level\n1712746 - [RFE] Ignition support for ovirt vms\n1712890 - engine-setup should check for snapshots in unsupported CL\n1714528 - Missing IDs on cluster upgrade buttons\n1714633 - Using more than one asterisk in the search string is not working when searching for users. \n1714834 - Cannot disable SCSI passthrough using API\n1715725 - Sending credentials in query string logs them in ovirt-request-logs\n1716590 - [RFE][UX] Make Cluster-wide \"Custom serial number policy\" value visible at VM level\n1718818 - [RFE] Enhance local disk passthrough\n1720686 - Tag ovirt-scheduler-proxy for RHV 4.4 RHEL 8\n1720694 - Build ovirt-engine-extension-aaa-jdbc for RHV 4.4 RHEL 8\n1720795 - New guest tools are available mark in case of guest tool located on Data Domain\n1724959 - RHV recommends reporting issues to GitHub rather than access.redhat.com (ovirt-\u003eRHV rebrand glitch?)\n1727025 - NPE in DestroyImage endAction during live merge leaving a task in DB for hours causing operations depending on host clean tasks to fail as Deactivate host/StopSPM/deactivate SD\n1728472 - Engine reports network out of sync due to ipv6 default gateway via ND RA on a non default route network. 
\n1729511 - engine-setup fails to upgrade to 4.3 with Unicode characters in CA subject\n1729811 - [scale] updatevmdynamic broken if too many users logged in - psql ERROR: value too long for type character varying(255)\n1730264 - VMs will fail to start if the vnic profile attached is having port mirroring enabled and have name greater than 15 characters\n1730436 - Snapshot creation was successful, but snapshot remains locked\n1731212 - RHV 4.4 landing page does not show login or allow scrolling. \n1731590 - Cannot preview snapshot, it fails and VM remains locked. \n1733031 - [RFE] Add warning when importing data domains to newer DC that may trigger SD format upgrade\n1733529 - Consume python-ovsdbapp dependencies from OSP in RHEL 8 RHV 4.4\n1733843 - Export to OVA fails if VM is running on the Host doing the export\n1734839 - Unable to start guests in our Power9 cluster without running in headless mode. \n1737234 - Attach a non-existent ISO to vm by the API return 201 and marks the Attach CD checkbox as ON\n1737684 - Engine deletes the leaf volume when SnapshotVDSCommand timed out without checking if the volume is still used by the VM\n1740978 - [RFE] Warn or Block importing VMs/Templates from unsupported compatibility levels. 
\n1741102 - host activation causes RHHI nodes to lose the quorum\n1741271 - Move/Copy disk are blocked if there is less space in source SD than the size of the disk\n1741625 - VM fails to be re-started with error: Failed to acquire lock: No space left on device\n1743690 - Commit and Undo buttons active when no snapshot selected\n1744557 - RHV 4.3 throws an exception when trying to access VMs which have snapshots from unsupported compatibility levels\n1745384 - [IPv6 Static] Engine should allow updating network\u0027s static ipv6gateway\n1745504 - Tag rhv-log-collector-analyzer for RHV 4.4 RHEL 8\n1746272 - [BREW BUILD ENABLER] Build the oVirt Ansible roles for RHV 4.4.0\n1746430 - [Rebase] Rebase v2v-conversion-host for RHV 4.4 Engine\n1746877 - [Metrics] Rebase bug - for the 4.4 release on EL8\n1747772 - Extra white space at the top of webadmin dialogs\n1749284 - Change the Snapshot operation to be asynchronous\n1749944 - teardownImage attempts to deactivate in-use LV\u0027s rendering the VM disk image/volumes in locked state. \n1750212 - MERGE_STATUS fails with \u0027Invalid UUID string: mapper\u0027 when Direct LUN that already exists is hot-plugged\n1750348 - [Tracking] rhvm-branding-rhv for RHV 4.4\n1750357 - [Tracking] ovirt-web-ui for RHV 4.4\n1750371 - [Tracking] ovirt-engine-ui-extensions for RHV 4.4\n1750482 - From VM Portal, users cannot create Operating System Windows VM. \n1751215 - Unable to change Graphical Console of HE VM. 
\n1751268 - add links to Insights to landing page\n1751423 - Improve description of shared memory statistics and remove unimplemented memory metrics from API\n1752890 - Build / Tag ovirt-engine-extension-aaa-ldap for RHV 4.4 RHEL 8\n1752995 - [RFE] Need to be able to set default console option\n1753629 - Build / Tag ovirt-engine-extension-aaa-misc for RHV 4.4 RHEL 8\n1753661 - Build / Tag ovirt-engine-extension-logger-log4j got RHV 4.4 / RHEl 8\n1753664 - Build ovirt-fast-forward-upgrade for RHV 4.4 /RHEL 8 support\n1754363 - [Scale] Engine generates excessive amount of dns configuration related sql queries\n1754490 - RHV Manager cannot start on EAP 7.2.4\n1755412 - Setting \"oreg_url: registry.redhat.io\" fails with error\n1758048 - clone(as thin) VM from template or create snapshot fails with \u0027Requested capacity 1073741824 \u003c parent capacity 3221225472 (volume:1211)\u0027\n1758289 - [Warn] Duplicate chassis entries in southbound database if the host is down while removing the host from Manager\n1762281 - Import of OVA created from template fails with java.lang.NullPointerException\n1763992 - [RFE] Show \"Open Console\" as the main option in the VM actions menu\n1764289 - Document details how each fence agent can be configured in RESTAPI\n1764791 - CVE-2019-17195 nimbus-jose-jwt: Uncaught exceptions while parsing a JWT\n1764932 - [BREW BUILD ENABLER] Build the ansible-runner-service for RHV 4.4\n1764943 - Create Snapshot does not proceed beyond CreateVolume\n1764959 - Apache is configured to offer TRACE method (security)\n1765660 - CVE-2017-18635 novnc: XSS vulnerability via the messages propagated to the status field\n1767319 - [RFE] forbid updating mac pool that contains ranges overlapping with any mac range in the system\n1767483 - CVE-2019-10086 apache-commons-beanutils: does not suppresses the class property in PropertyUtilsBean by default\n1768707 - Cannot set or update iscsi portal group tag when editing storage connection via API\n1768844 - RHEL 
Advanced virtualization module streams support\n1769463 - [Scale] Slow performance for api/clusters when many networks devices are present\n1770237 - Cannot assign a vNIC profile for VM instance profile. \n1771793 - VM Portal crashes in what appears to be a permission related problem. \n1773313 - RHV Metric store installation fails with error: \"You need to install \\\"jmespath\\\" prior to running json_query filter\"\n1777954 - VM Templates greater then 101 quantity are not listed/reported in RHV-M Webadmin UI. \n1779580 - drop rhvm-doc package\n1781001 - CVE-2019-19336 ovirt-engine: response_type parameter allows reflected XSS\n1782236 - Windows Update (the drivers) enablement\n1782279 - Warning message for low space is not received on Imported Storage domain\n1782882 - qemu-kvm: kvm_init_vcpu failed: Function not implemented\n1784049 - Rhel6 guest with cluster default q35 chipset causes kernel panic\n1784385 - Still requiring rhvm-doc in rhvm-setup-plugins\n1785750 - [RFE] Ability to change default VM action (Suspend) in the VM Portal. \n1788424 - Importing a VM having direct LUN attached using virtio driver is failing with error \"VirtIO-SCSI is disabled for the VM\"\n1796809 - Build apache-sshd for RHV 4.4 RHEL 8\n1796811 - Remove bundled apache-sshd library\n1796815 - Build snmp4j for RHV 4.4 RHEL 8\n1796817 - Remove bundled snmp4j library\n1797316 - Snapshot creation from VM fails on second snapshot and afterwords\n1797500 - Add disk operation failed to complete. 
\n1798114 - Build apache-commons-digester for RHV 4.4 RHEL 8\n1798117 - Build apache-commons-configuration for RHV 4.4 RHEL 8\n1798120 - Build apache-commons-jexl for RHV 4.4 RHEL 8\n1798127 - Build apache-commons-collections4 for RHV 4.4 RHEL 8\n1798137 - Build apache-commons-vfs for RHV 4.4 RHEL 8\n1799171 - Build ws-commons-util for RHV 4.4 RHEL 8\n1799204 - Build xmlrpc for RHV 4.4 RHEL 8\n1801149 - CVE-2019-13990 libquartz: XXE attacks via job description\n1801709 - Disable activation of the host while Enroll certificate flow is still in progress\n1803597 - rhv-image-discrepancies should skip storage domains in maintenance mode and ISO/Export\n1805669 - change requirement on rhvm package from spice-client-msi to spice-client-win\n1806276 - [HE] ovirt-provider-ovn is non-functional on 4.3.9 Hosted-Engine\n1807047 - Build m2crypto for RHV 4.4 RHEL 8\n1807860 - [RFE] Allow resource allocation options to be customized\n1808096 - Uploading ISOs causes \"Uncaught exception occurred. Please try reloading the page. Details: (TypeError) : a.n is null\"\n1808126 - host_service.install() does not work with deploy_hosted_engine as True. \n1809040 - [CNV\u0026RHV] let the user know that token is not valid anymore\n1809052 - [CNV\u0026RHV] ovirt-engine log file spammed by failed timers ( approx 3-5 messages/sec )\n1809875 - rhv-image-discrepancies only compares images on the last DC\n1809877 - rhv-image-discrepancies sends dump-volume-chains with parameter that is ignored\n1810893 - mountOptions is ignored for \"import storage domain\" from GUI\n1811865 - [Scale] Host Monitoring generates excessive amount of qos related sql queries\n1811869 - [Scale] Webadmin\\REST for host interface list response time is too long because of excessive amount of qos related sql queries\n1812875 - Unable to create VMs when french Language is selected for the rhvm gui. 
\n1813305 - Engine updating SLA policies of VMs continuously in an environment which is not having any QOS configured\n1813344 - CVE-2020-7598 nodejs-minimist: prototype pollution allows adding or modifying properties of Object.prototype using a constructor or __proto__ payload\n1814197 - [CNV\u0026RHV] when provider is remover DC is left behind and active\n1814215 - [CNV\u0026RHV] Adding new provider to engine fails after succesfull test\n1816017 - Build log4j12 for RHV 4.4 EL8\n1816643 - [CNV\u0026RHV] VM created in CNV not visible in RHV\n1816654 - [CNV\u0026RHV] adding provider with already created vm failed\n1816693 - [CNV\u0026RHV] CNV VM failed to restart even if 1st dialog looks fine\n1816739 - [CNV\u0026RHV] CNV VM updated form CNV side doesn\u0027t update vm properties over on RHV side\n1817467 - [Tracking] Migration path between RHV 4.3 and 4.4\n1818745 - rhv-log-collector-analyzer 0.2.17 still requires pyhton2\n1819201 - [CodeChange][i18n] oVirt 4.4 rhv branding - translation update\n1819248 - Cannot upgrade host after engine setup\n1819514 - Failed to register 4.4 host to the latest engine (4.4.0-0.29.master.el8ev)\n1819960 - NPE on ImportVmTemplateFromConfigurationCommand when creating VM from ovf_data\n1820621 - Build apache-commons-compress for RHV 4.4 EL8\n1820638 - Build apache-commons-jxpath for RHV 4.4 EL8\n1821164 - Failed snapshot creation can cause data corruption of other VMs\n1821930 - Enable only TLSv1.2+ protocol for SPICE on EL7 hosts\n1824095 - VM portal shows only error\n1825793 - RHV branding is missing after upgrade from 4.3\n1826248 - [4.4][ovirt-cockpit-sso] Compatibility issues with python3\n1826437 - The console client resources page return HTTP code 500\n1826801 - [CNV\u0026RHV] update of memory on cnv side does not propagate to rhv\n1826855 - [cnv\u0026rhv] update of cpu on cnv side causing expetion in engine.log\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n1828669 - 
After SPM select the engine lost communication to all hosts until restarted [improved logging]\n1828736 - [CNV\u0026RHV] cnv template is not propagated to rhv\n1829189 - engine-setup httpd ssl configuration conflicts with Red Hat Insights\n1829656 - Failed to register 4.3 host to 4.4 engine with 4.3 cluster (4.4.0-0.33.master.el8ev)\n1829830 - vhost custom properties does not accept \u0027-\u0027\n1832161 - rhv-log-collector-analyzer fails with UnicodeDecodeError on RHEL8\n1834523 - Edit VM -\u003e Enable Smartcard sharing does not stick when VM is running\n1838493 - Live snapshot made with freeze in the engine will cause the FS to be frozen\n1841495 - Upgrade openstack-java-sdk to 3.2.9\n1842495 - high cpu usage after entering wrong search pattern in RHVM\n1844270 - [vGPU] nodisplay option for mdev broken since mdev scheduling unit\n1844855 - Missing images (favicon.ico, banner logo) and missing brand.css file on VM portal d/s installation\n1845473 - Exporting an OVA file from a VM results in its ovf file having a format of RAW when the disk is COW\n1847420 - CVE-2020-10775 ovirt-engine: Redirect to arbitrary URL allows for phishing\n1850004 - CVE-2020-11023 jQuery: passing HTML containing \u003coption\u003e elements to manipulation methods could result in untrusted code execution\n1853444 - [CodeChange][i18n] oVirt 4.4 rhv branding - translation update (July-2020)\n1854563 - [4.4 downstream only][RFE] Include a link to grafana on front page\n\n6. 
Package List:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:\n\nSource:\nansible-runner-1.4.5-1.el8ar.src.rpm\nansible-runner-service-1.0.2-1.el8ev.src.rpm\napache-commons-collections4-4.4-1.el8ev.src.rpm\napache-commons-compress-1.18-1.el8ev.src.rpm\napache-commons-configuration-1.10-1.el8ev.src.rpm\napache-commons-jexl-2.1.1-1.el8ev.src.rpm\napache-commons-jxpath-1.3-29.el8ev.src.rpm\napache-commons-vfs-2.4.1-1.el8ev.src.rpm\napache-sshd-2.5.1-1.el8ev.src.rpm\nebay-cors-filter-1.0.1-4.el8ev.src.rpm\ned25519-java-0.3.0-1.el8ev.src.rpm\nengine-db-query-1.6.1-1.el8ev.src.rpm\njava-client-kubevirt-0.5.0-1.el8ev.src.rpm\nlog4j12-1.2.17-22.el8ev.src.rpm\nm2crypto-0.35.2-5.el8ev.src.rpm\nmakeself-2.4.0-4.el8ev.src.rpm\nnovnc-1.1.0-1.el8ost.src.rpm\nopenstack-java-sdk-3.2.9-1.el8ev.src.rpm\novirt-cockpit-sso-0.1.4-1.el8ev.src.rpm\novirt-engine-4.4.1.8-0.7.el8ev.src.rpm\novirt-engine-api-explorer-0.0.6-1.el8ev.src.rpm\novirt-engine-dwh-4.4.1.2-1.el8ev.src.rpm\novirt-engine-extension-aaa-jdbc-1.2.0-1.el8ev.src.rpm\novirt-engine-extension-aaa-ldap-1.4.0-1.el8ev.src.rpm\novirt-engine-extension-aaa-misc-1.1.0-1.el8ev.src.rpm\novirt-engine-extension-logger-log4j-1.1.0-1.el8ev.src.rpm\novirt-engine-extensions-api-1.0.1-1.el8ev.src.rpm\novirt-engine-metrics-1.4.1.1-1.el8ev.src.rpm\novirt-engine-ui-extensions-1.2.2-1.el8ev.src.rpm\novirt-fast-forward-upgrade-1.1.6-0.el8ev.src.rpm\novirt-log-collector-4.4.2-1.el8ev.src.rpm\novirt-scheduler-proxy-0.1.9-1.el8ev.src.rpm\novirt-web-ui-1.6.3-1.el8ev.src.rpm\npython-aniso8601-0.82-4.el8ost.src.rpm\npython-flask-1.0.2-2.el8ost.src.rpm\npython-flask-restful-0.3.6-8.el8ost.src.rpm\npython-netaddr-0.7.19-8.1.el8ost.src.rpm\npython-notario-0.0.16-2.el8cp.src.rpm\npython-ovsdbapp-0.17.1-0.20191216120142.206cf14.el8ost.src.rpm\npython-pbr-5.1.2-2.el8ost.src.rpm\npython-six-1.12.0-1.el8ost.src.rpm\npython-websocket-client-0.54.0-1.el8ost.src.rpm\npython-werkzeug-0.16.0-1.el8ost.src.rpm\nrhv-log-collector-analyzer-1.0.2-1.el8ev.src.rpm
\nrhvm-branding-rhv-4.4.4-1.el8ev.src.rpm\nrhvm-dependencies-4.4.0-1.el8ev.src.rpm\nrhvm-setup-plugins-4.4.2-1.el8ev.src.rpm\nsnmp4j-2.4.1-1.el8ev.src.rpm\nunboundid-ldapsdk-4.0.14-1.el8ev.src.rpm\nvdsm-jsonrpc-java-1.5.4-1.el8ev.src.rpm\nws-commons-util-1.0.2-1.el8ev.src.rpm\nxmlrpc-3.1.3-1.el8ev.src.rpm\n\nnoarch:\nansible-runner-1.4.5-1.el8ar.noarch.rpm\nansible-runner-service-1.0.2-1.el8ev.noarch.rpm\napache-commons-collections4-4.4-1.el8ev.noarch.rpm\napache-commons-collections4-javadoc-4.4-1.el8ev.noarch.rpm\napache-commons-compress-1.18-1.el8ev.noarch.rpm\napache-commons-compress-javadoc-1.18-1.el8ev.noarch.rpm\napache-commons-configuration-1.10-1.el8ev.noarch.rpm\napache-commons-jexl-2.1.1-1.el8ev.noarch.rpm\napache-commons-jexl-javadoc-2.1.1-1.el8ev.noarch.rpm\napache-commons-jxpath-1.3-29.el8ev.noarch.rpm\napache-commons-jxpath-javadoc-1.3-29.el8ev.noarch.rpm\napache-commons-vfs-2.4.1-1.el8ev.noarch.rpm\napache-commons-vfs-ant-2.4.1-1.el8ev.noarch.rpm\napache-commons-vfs-examples-2.4.1-1.el8ev.noarch.rpm\napache-commons-vfs-javadoc-2.4.1-1.el8ev.noarch.rpm\napache-sshd-2.5.1-1.el8ev.noarch.rpm\napache-sshd-javadoc-2.5.1-1.el8ev.noarch.rpm\nebay-cors-filter-1.0.1-4.el8ev.noarch.rpm\ned25519-java-0.3.0-1.el8ev.noarch.rpm\ned25519-java-javadoc-0.3.0-1.el8ev.noarch.rpm\nengine-db-query-1.6.1-1.el8ev.noarch.rpm\njava-client-kubevirt-0.5.0-1.el8ev.noarch.rpm\nlog4j12-1.2.17-22.el8ev.noarch.rpm\nlog4j12-javadoc-1.2.17-22.el8ev.noarch.rpm\nmakeself-2.4.0-4.el8ev.noarch.rpm\nnovnc-1.1.0-1.el8ost.noarch.rpm\nopenstack-java-ceilometer-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-ceilometer-model-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-cinder-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-cinder-model-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-glance-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-glance-model-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-heat-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-heat-model-3.2.
9-1.el8ev.noarch.rpm\nopenstack-java-javadoc-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-keystone-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-keystone-model-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-nova-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-nova-model-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-quantum-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-quantum-model-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-resteasy-connector-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-swift-client-3.2.9-1.el8ev.noarch.rpm\nopenstack-java-swift-model-3.2.9-1.el8ev.noarch.rpm\novirt-cockpit-sso-0.1.4-1.el8ev.noarch.rpm\novirt-engine-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-api-explorer-0.0.6-1.el8ev.noarch.rpm\novirt-engine-backend-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-dbscripts-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-dwh-4.4.1.2-1.el8ev.noarch.rpm\novirt-engine-dwh-grafana-integration-setup-4.4.1.2-1.el8ev.noarch.rpm\novirt-engine-dwh-setup-4.4.1.2-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-jdbc-1.2.0-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-1.4.0-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-setup-1.4.0-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-misc-1.1.0-1.el8ev.noarch.rpm\novirt-engine-extension-logger-log4j-1.1.0-1.el8ev.noarch.rpm\novirt-engine-extensions-api-1.0.1-1.el8ev.noarch.rpm\novirt-engine-extensions-api-javadoc-1.0.1-1.el8ev.noarch.rpm\novirt-engine-health-check-bundler-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-metrics-1.4.1.1-1.el8ev.noarch.rpm\novirt-engine-restapi-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-setup-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-setup-base-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-setup-plugin-cinderlib-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-setup-plugin-imageio-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-common-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.1.8-0.7.el8e
v.noarch.rpm\novirt-engine-setup-plugin-websocket-proxy-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-tools-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-tools-backup-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-ui-extensions-1.2.2-1.el8ev.noarch.rpm\novirt-engine-vmconsole-proxy-helper-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-webadmin-portal-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-engine-websocket-proxy-4.4.1.8-0.7.el8ev.noarch.rpm\novirt-fast-forward-upgrade-1.1.6-0.el8ev.noarch.rpm\novirt-log-collector-4.4.2-1.el8ev.noarch.rpm\novirt-scheduler-proxy-0.1.9-1.el8ev.noarch.rpm\novirt-web-ui-1.6.3-1.el8ev.noarch.rpm\npython-flask-doc-1.0.2-2.el8ost.noarch.rpm\npython2-netaddr-0.7.19-8.1.el8ost.noarch.rpm\npython2-pbr-5.1.2-2.el8ost.noarch.rpm\npython2-six-1.12.0-1.el8ost.noarch.rpm\npython3-aniso8601-0.82-4.el8ost.noarch.rpm\npython3-ansible-runner-1.4.5-1.el8ar.noarch.rpm\npython3-flask-1.0.2-2.el8ost.noarch.rpm\npython3-flask-restful-0.3.6-8.el8ost.noarch.rpm\npython3-netaddr-0.7.19-8.1.el8ost.noarch.rpm\npython3-notario-0.0.16-2.el8cp.noarch.rpm\npython3-ovirt-engine-lib-4.4.1.8-0.7.el8ev.noarch.rpm\npython3-ovsdbapp-0.17.1-0.20191216120142.206cf14.el8ost.noarch.rpm\npython3-pbr-5.1.2-2.el8ost.noarch.rpm\npython3-six-1.12.0-1.el8ost.noarch.rpm\npython3-websocket-client-0.54.0-1.el8ost.noarch.rpm\npython3-werkzeug-0.16.0-1.el8ost.noarch.rpm\npython3-werkzeug-doc-0.16.0-1.el8ost.noarch.rpm\nrhv-log-collector-analyzer-1.0.2-1.el8ev.noarch.rpm\nrhvm-4.4.1.8-0.7.el8ev.noarch.rpm\nrhvm-branding-rhv-4.4.4-1.el8ev.noarch.rpm\nrhvm-dependencies-4.4.0-1.el8ev.noarch.rpm\nrhvm-setup-plugins-4.4.2-1.el8ev.noarch.rpm\nsnmp4j-2.4.1-1.el8ev.noarch.rpm\nsnmp4j-javadoc-2.4.1-1.el8ev.noarch.rpm\nunboundid-ldapsdk-4.0.14-1.el8ev.noarch.rpm\nunboundid-ldapsdk-javadoc-4.0.14-1.el8ev.noarch.rpm\nvdsm-jsonrpc-java-1.5.4-1.el8ev.noarch.rpm\nws-commons-util-1.0.2-1.el8ev.noarch.rpm\nws-commons-util-javadoc-1.0.2-1.el8ev.noarch.rpm\nxmlrpc-client-3.1.3-1.el8ev.noarch.rpm\nxmlrpc-common-3.1.3-1.el8ev
.noarch.rpm\nxmlrpc-javadoc-3.1.3-1.el8ev.noarch.rpm\nxmlrpc-server-3.1.3-1.el8ev.noarch.rpm\n\nx86_64:\nm2crypto-debugsource-0.35.2-5.el8ev.x86_64.rpm\npython3-m2crypto-0.35.2-5.el8ev.x86_64.rpm\npython3-m2crypto-debuginfo-0.35.2-5.el8ev.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2017-18635\nhttps://access.redhat.com/security/cve/CVE-2019-8331\nhttps://access.redhat.com/security/cve/CVE-2019-10086\nhttps://access.redhat.com/security/cve/CVE-2019-13990\nhttps://access.redhat.com/security/cve/CVE-2019-17195\nhttps://access.redhat.com/security/cve/CVE-2019-19336\nhttps://access.redhat.com/security/cve/CVE-2020-7598\nhttps://access.redhat.com/security/cve/CVE-2020-10775\nhttps://access.redhat.com/security/cve/CVE-2020-11022\nhttps://access.redhat.com/security/cve/CVE-2020-11023\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/technical_notes\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBXylir9zjgjWX9erEAQii/A//bJm3u0+ul+LdQwttSJJ79OdVqcp3FktP\ntdPj8AFbB6F9KkuX9FAQja0/2pgZAldB3Eyz57GYTxyDD1qeMqYSayGHCH01GWAn\nu8uF90lcSz6YvgEPDh1mWhLYQMfdWT6IUuKOEHldt8TyHbc7dX3xCbsLDzNCxGbl\nQuPSFPQBJaAXETSw42NGzdUzaM9zoQ0Mngj+Owcgw53YyBy3BSLAb5bKuijvkcLy\nSVCAxxiQ89E+cnETKYIv4dOfqXGA5wLg68hDmUQyFcXHA9nQbJM9Q0s1fbZ2Wav1\noGGTqJDTgVElxrHB5pYJ6pu484ZgJealkBCrHA2OBsMJUadwitVvQLXFZF5OyN0N\nf/vtZ1ua4mZADa61qfnlmVRiyISwmPPWIOImA3TIE5Q8Yl5ucCqtDjQPoJAbXsUl\nY22Bb5x7JyrN0nyOgwh6BGGK51CmOaP+xNuWD7osI24pnzdmPTZuJrZLePxgPgac\nWWQNznzvokknva2ofvujAm+DEl+W7W3A8Vs9wkmUWYlaVC7GFLEkcvQjjHahZ7kh\ndVJNoh70vpA+aJCMQHYK6MGtCSAWoqXkRTsHb3Stfm2vLLz6GYxY5OuvB7Z0ME1N\nzCiFjBla5+3nKx5ab8Pola56T1wRULHL6zYN9GTsOzxjdJsKHXBVeV8OYcnoHiza\n2TrKn2dtZwI=\n=92Q3\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nSee the following documentation, which will be updated shortly for release\n3.11.219, for important instructions on how to upgrade your cluster and\nfully\napply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_r\nelease_notes.html\n\nThis update is available via the Red Hat Network. Bugs fixed (https://bugzilla.redhat.com/):\n\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n\n6. You can also manage\nuser accounts for web applications, mobile applications, and RESTful web\nservices. Description:\n\nRed Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak\nproject, that provides authentication and standards-based single sign-on\ncapabilities for web and mobile applications. 
\n\nSecurity Fix(es):\n\n* jquery: Prototype pollution in object\u0027s prototype leading to denial of\nservice, remote code execution, or property injection (CVE-2019-11358)\n\n* jquery: Cross-site scripting via cross-domain ajax requests\n(CVE-2015-9251)\n\n* bootstrap: Cross-site Scripting (XSS) in the collapse data-parent\nattribute\n(CVE-2018-14040)\n\n* jquery: Untrusted code execution via \u003coption\u003e tag in HTML passed to DOM\nmanipulation methods (CVE-2020-11023)\n\n* jquery: Cross-site scripting due to improper injQuery.htmlPrefilter\nmethod\n(CVE-2020-11022)\n\n* bootstrap: XSS in the data-target attribute (CVE-2016-10735)\n\n* bootstrap: Cross-site Scripting (XSS) in the data-target property of\nscrollspy\n(CVE-2018-14041)\n\n* sshd-common: mina-sshd: Java unsafe deserialization vulnerability\n(CVE-2022-45047)\n\n* woodstox-core: woodstox to serialise XML data was vulnerable to Denial of\nService attacks (CVE-2022-40152)\n\n* bootstrap: Cross-site Scripting (XSS) in the data-container property of\ntooltip (CVE-2018-14042)\n\n* bootstrap: XSS in the tooltip or popover data-template attribute\n(CVE-2019-8331)\n\n* nodejs-moment: Regular expression denial of service (CVE-2017-18214)\n\n* wildfly-elytron: possible timing attacks via use of unsafe comparator\n(CVE-2022-3143)\n\n* jackson-databind: use of deeply nested arrays (CVE-2022-42004)\n\n* jackson-databind: deep wrapper array nesting wrt\nUNWRAP_SINGLE_VALUE_ARRAYS\n(CVE-2022-42003)\n\n* jettison: parser crash by stackoverflow (CVE-2022-40149)\n\n* jettison: memory exhaustion via user-supplied XML or JSON data\n(CVE-2022-40150)\n\n* jettison: If the value in map is the map\u0027s self, the new new\nJSONObject(map) cause StackOverflowError which may lead to dos\n(CVE-2022-45693)\n\n* CXF: Apache CXF: SSRF Vulnerability (CVE-2022-46364)\n\n4. 
JIRA issues fixed (https://issues.jboss.org/):\n\nJBEAP-23864 - (7.4.z) Upgrade xmlsec from 2.1.7.redhat-00001 to 2.2.3.redhat-00001\nJBEAP-23865 - [GSS](7.4.z) Upgrade Apache CXF from 3.3.13.redhat-00001 to 3.4.10.redhat-00001\nJBEAP-23866 - (7.4.z) Upgrade wss4j from 2.2.7.redhat-00001 to 2.3.3.redhat-00001\nJBEAP-23928 - Tracker bug for the EAP 7.4.9 release for RHEL-9\nJBEAP-24055 - (7.4.z) Upgrade HAL from 3.3.15.Final-redhat-00001 to 3.3.16.Final-redhat-00001\nJBEAP-24081 - (7.4.z) Upgrade Elytron from 1.15.14.Final-redhat-00001 to 1.15.15.Final-redhat-00001\nJBEAP-24095 - (7.4.z) Upgrade elytron-web from 1.9.2.Final-redhat-00001 to 1.9.3.Final-redhat-00001\nJBEAP-24100 - [GSS](7.4.z) Upgrade Undertow from 2.2.20.SP1-redhat-00001 to 2.2.22.SP3-redhat-00001\nJBEAP-24127 - (7.4.z) UNDERTOW-2123 - Update AsyncContextImpl.dispatch to use proper value\nJBEAP-24128 - (7.4.z) Upgrade Hibernate Search from 5.10.7.Final-redhat-00001 to 5.10.13.Final-redhat-00001\nJBEAP-24132 - [GSS](7.4.z) Upgrade Ironjacamar from 1.5.3.SP2-redhat-00001 to 1.5.10.Final-redhat-00001\nJBEAP-24147 - (7.4.z) Upgrade jboss-ejb-client from 4.0.45.Final-redhat-00001 to 4.0.49.Final-redhat-00001\nJBEAP-24167 - (7.4.z) Upgrade WildFly Core from 15.0.19.Final-redhat-00001 to 15.0.21.Final-redhat-00002\nJBEAP-24191 - [GSS](7.4.z) Upgrade remoting from 5.0.26.SP1-redhat-00001 to 5.0.27.Final-redhat-00001\nJBEAP-24195 - [GSS](7.4.z) Upgrade JSF API from 3.0.0.SP06-redhat-00001 to 3.0.0.SP07-redhat-00001\nJBEAP-24207 - (7.4.z) Upgrade Soteria from 1.0.1.redhat-00002 to 1.0.1.redhat-00003\nJBEAP-24248 - (7.4.z) ELY-2492 - Upgrade sshd-common in Elytron from 2.7.0 to 2.9.2\nJBEAP-24426 - (7.4.z) Upgrade Elytron from 1.15.15.Final-redhat-00001 to 1.15.16.Final-redhat-00001\nJBEAP-24427 - (7.4.z) Upgrade WildFly Core from 15.0.21.Final-redhat-00002 to 15.0.22.Final-redhat-00001\n\n7",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-11022"
},
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "PACKETSTORM",
"id": "161727"
},
{
"db": "PACKETSTORM",
"id": "158750"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159727"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170819"
}
],
"trust": 1.62
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2020-11022",
"trust": 2.4
},
{
"db": "PACKETSTORM",
"id": "162159",
"trust": 1.7
},
{
"db": "TENABLE",
"id": "TNS-2021-02",
"trust": 1.7
},
{
"db": "TENABLE",
"id": "TNS-2020-10",
"trust": 1.7
},
{
"db": "TENABLE",
"id": "TNS-2020-11",
"trust": 1.7
},
{
"db": "TENABLE",
"id": "TNS-2021-10",
"trust": 1.7
},
{
"db": "PACKETSTORM",
"id": "161727",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "158750",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "157850",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170823",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159852",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "160274",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170821",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159275",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159353",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168304",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159513",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "158555",
"trust": 0.7
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2429",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2020.2694",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0620",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0845",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4248",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3700",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2775",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1066",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2287",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1916",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3485",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0909",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1961",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.0583",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3902",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3368",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.0585",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2515",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1880",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1863",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1519",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0824",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2375",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0465",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3255",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2966",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5150",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2525",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1804",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3875",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2660",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1925",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1512",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2660.3",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3028",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1653",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022071412",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021042543",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072094",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021101936",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022041931",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022042537",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022012403",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021072292",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022022516",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021072721",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022012754",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021042618",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021042302",
"trust": 0.6
},
{
"db": "CXSECURITY",
"id": "WLB-2022060033",
"trust": 0.6
},
{
"db": "EXPLOIT-DB",
"id": "49766",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "157905",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "158406",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "158282",
"trust": 0.6
},
{
"db": "LENOVO",
"id": "LEN-60182",
"trust": 0.6
},
{
"db": "ICS CERT",
"id": "ICSA-22-097-01",
"trust": 0.6
},
{
"db": "NSFOCUS",
"id": "48898",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "171215",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170819",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171213",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171214",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171212",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159876",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170817",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-163559",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159727",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171211",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "PACKETSTORM",
"id": "161727"
},
{
"db": "PACKETSTORM",
"id": "158750"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159727"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170819"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2429"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"id": "VAR-202004-2191",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T22:10:21.285000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "jQuery Fixes for cross-site scripting vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=117510"
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202004-2429"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-79",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 2.3,
"url": "http://packetstormsecurity.com/files/162159/jquery-1.2-cross-site-scripting.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuapr2021.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpujan2021.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpujul2020.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.7,
"url": "https://github.com/jquery/jquery/security/advisories/ghsa-gxr4-xjj5-5px2"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20200511-0006/"
},
{
"trust": 1.7,
"url": "https://www.drupal.org/sa-core-2020-002"
},
{
"trust": 1.7,
"url": "https://www.tenable.com/security/tns-2020-10"
},
{
"trust": 1.7,
"url": "https://www.tenable.com/security/tns-2020-11"
},
{
"trust": 1.7,
"url": "https://www.tenable.com/security/tns-2021-02"
},
{
"trust": 1.7,
"url": "https://www.tenable.com/security/tns-2021-10"
},
{
"trust": 1.7,
"url": "https://www.debian.org/security/2020/dsa-4693"
},
{
"trust": 1.7,
"url": "https://security.gentoo.org/glsa/202007-03"
},
{
"trust": 1.7,
"url": "https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/"
},
{
"trust": 1.7,
"url": "https://github.com/jquery/jquery/commit/1d61fd9407e6fbe82fe55cb0b938307aa0791f77"
},
{
"trust": 1.7,
"url": "https://jquery.com/upgrade-guide/3.5/"
},
{
"trust": 1.7,
"url": "https://www.oracle.com//security-alerts/cpujul2021.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2021/03/msg00033.html"
},
{
"trust": 1.7,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00067.html"
},
{
"trust": 1.7,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00085.html"
},
{
"trust": 1.7,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-11/msg00039.html"
},
{
"trust": 1.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11022"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r0483ba0072783c2e1bfea613984bfb3c86e73ba8879d780dc1cc7d36%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r49ce4243b4738dd763caeb27fa8ad6afb426ae3e8c011ff00b8b1f48%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r54565a8f025c7c4f305355fdfd75b68eca442eebdb5f31c2e7d977ae%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r564585d97bc069137e64f521e68ba490c7c9c5b342df5d73c49a0760%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r706cfbc098420f7113968cc377247ec3d1439bce42e679c11c609e2d%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r8f70b0f65d6bedf316ecd899371fd89e65333bc988f6326d2956735c%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rbb448222ba62c430e21e13f940be4cb5cfc373cd3bce56b48c0ffa67%40%3cdev.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rdf44341677cf7eec7e9aa96dcf3f37ed709544863d619cca8c36f133%40%3ccommits.airflow.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/re4ae96fa5c1a2fe71ccbb7b7ac1538bd0cb677be270a2bf6e2f8d108%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rede9cfaa756e050a3d83045008f84a62802fc68c17f2b4eabeaae5e4%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ree3bd8ddb23df5fa4e372d11c226830ea3650056b1059f3965b3fce2%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.debian.org/debian-lts-announce/2023/08/msg00040.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/avkyxlwclzbv2n7m46kyk4lva5oxwpby/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qpn2l2xvqgua2v5hnqjwhk3apsk3vn7k/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/sapqvx3xdnpgft26qaq6ajixzzbz4cd4/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/sfp4uk4egp4afh2mwyj5a5z4i7xvfq6b/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/voe7p7apprqkd4fgnhbkjpdy6ffcoh3w/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/avkyxlwclzbv2n7m46kyk4lva5oxwpby/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/voe7p7apprqkd4fgnhbkjpdy6ffcoh3w/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qpn2l2xvqgua2v5hnqjwhk3apsk3vn7k/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/sfp4uk4egp4afh2mwyj5a5z4i7xvfq6b/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/sapqvx3xdnpgft26qaq6ajixzzbz4cd4/"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rdf44341677cf7eec7e9aa96dcf3f37ed709544863d619cca8c36f133@%3ccommits.airflow.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rbb448222ba62c430e21e13f940be4cb5cfc373cd3bce56b48c0ffa67@%3cdev.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r706cfbc098420f7113968cc377247ec3d1439bce42e679c11c609e2d@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r49ce4243b4738dd763caeb27fa8ad6afb426ae3e8c011ff00b8b1f48@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r564585d97bc069137e64f521e68ba490c7c9c5b342df5d73c49a0760@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r8f70b0f65d6bedf316ecd899371fd89e65333bc988f6326d2956735c@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rede9cfaa756e050a3d83045008f84a62802fc68c17f2b4eabeaae5e4@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/ree3bd8ddb23df5fa4e372d11c226830ea3650056b1059f3965b3fce2@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r54565a8f025c7c4f305355fdfd75b68eca442eebdb5f31c2e7d977ae@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/re4ae96fa5c1a2fe71ccbb7b7ac1538bd0cb677be270a2bf6e2f8d108@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r0483ba0072783c2e1bfea613984bfb3c86e73ba8879d780dc1cc7d36@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2020-11022"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022041931"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161727/red-hat-security-advisory-2021-0778-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159275/red-hat-security-advisory-2020-3807-01.html"
},
{
"trust": 0.6,
"url": "https://www.oracle.com/security-alerts/cpujul2021.html"
},
{
"trust": 0.6,
"url": "https://www.exploit-db.com/exploits/49766"
},
{
"trust": 0.6,
"url": "http://www.nsfocus.net/vulndb/48898"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3875/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-jquery-vulnerabilities-affect-ibm-emptoris-strategic-supply-management-platform-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6520510"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158555/gentoo-linux-security-advisory-202007-03.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-jquery-as-used-by-ibm-qradar-network-packet-capture-is-vulnerable-to-cross-site-scripting-xss-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021072292"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-10/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-8/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2375/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1066"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5150"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168304/red-hat-security-advisory-2022-6393-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021042543"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1804/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1925/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021042302"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/160274/red-hat-security-advisory-2020-5249-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021072721"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022022516"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/157850/red-hat-security-advisory-2020-2217-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072094"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021101936"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158406/red-hat-security-advisory-2020-2412-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2660.3/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-kenexa-lms-on-premise-all-jquery-publicly-disclosed-vulnerability-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-planning-analytics-workspace-is-affected-by-security-vulnerabilities-3/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-jquery-affect-ibm-wiotp-messagegateway-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1916"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1519"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170821/red-hat-security-advisory-2023-0552-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0585"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159852/red-hat-security-advisory-2020-4847-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2660/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0583"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-license-key-server-administration-and-reporting-tool-is-impacted-by-multiple-vulnerabilities-in-jquery-bootstrap-and-angularjs/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerability-issues-affect-ibm-spectrum-symphony-7-3-1/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-cross-site-scripting-vulnerabilities-in-jquery-might-affect-ibm-business-automation-workflow-and-ibm-business-process-manager-bpm-cve-2020-7656-cve-2020-11022-cve-2020-11023/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3255/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3485/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159513/red-hat-security-advisory-2020-4211-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-kenexa-lcms-premier-on-premise-all-jquery-publicly-disclosed-vulnerability-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4248/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2287/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2966/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/157905/red-hat-security-advisory-2020-2362-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1880/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1653"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2694/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022042537"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158282/red-hat-security-advisory-2020-2813-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021042618"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0845"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2775/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-jquery-affect-ibm-license-metric-tool-v9/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0824"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-verify-information-queue-uses-a-node-js-package-with-known-vulnerabilities-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1961/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1512"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-cross-site-scripting-vulnerabilities-in-jquery-might-affect-ibm-business-automation-workflow-and-ibm-business-process-manager-bpm-cve-2020-7656-cve-2020-11022-cve-2020-11023-2/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159353/red-hat-security-advisory-2020-3936-01.html"
},
{
"trust": 0.6,
"url": "https://support.lenovo.com/us/en/product_security/len-60182"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilites-affect-ibm-jazz-foundation-and-ibm-engineering-products-5/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3028/"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/issue/wlb-2022060033"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2515"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158750/red-hat-security-advisory-2020-3247-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-jquery-as-used-in-ibm-security-qradar-packet-capture-is-vulnerable-to-cross-site-scripting-xss-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022012754"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0465"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6525182"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-tivoli-netcool-impact-is-affected-by-jquery-vulnerabilities-cve-2020-11022-cve-2020-11023/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-api-connect-is-impacted-by-vulnerabilities-in-drupal-cve-2020-11022-cve-2020-11023/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6490381"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1863/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-api-connect-is-impacted-by-vulnerabilities-in-drupal-cve-2020-11022-cve-2020-11023-2/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-jquery-fixed-in-mobile-foundation-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3700/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022071412"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0909"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-security-vulnerabilities-have-been-fixed-in-ibm-security-identity-manager-virtual-appliance/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3902/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2525"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0620"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022012403"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-jquery-spring-dom4j-mongodb-linux-kernel-targetcli-fb-jackson-node-js-and-apache-commons-affect-ibm-spectrum-protect-plus/"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-097-01"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-security-vulnerability-has-been-identified-in-bigfix-platform-shipped-with-ibm-license-metric-tool-2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3368/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170823/red-hat-security-advisory-2023-0553-01.html"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-11023"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11023"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11358"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11358"
},
{
"trust": 0.3,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14042"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14040"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-40150"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-40149"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-45047"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-46364"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-42004"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-45693"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-42003"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-14042"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-14040"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-7598"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8331"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8331"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1471"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1438"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3916"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25857"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46175"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-44906"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0091"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3782"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2764"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2764"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46363"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1471"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0264"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38751"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1274"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-37603"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38749"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1438"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25857"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1274"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12723"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17006"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20907"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12401"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12402"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20372"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10878"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-7595"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20253"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17006"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11719"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12401"
},
{
"trust": 0.1,
"url": "https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17023"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17023"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-6829"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:0778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12403"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12400"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20388"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12723"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11756"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11756"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12243"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12400"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11727"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12243"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1971"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20180"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-5766"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12403"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15903"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20178"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-5766"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20372"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19956"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17498"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17498"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35678"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12402"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13990"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10775"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17195"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/technical_notes"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-18635"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-7598"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:3247"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10086"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10086"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19336"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13990"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17195"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-18635"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10775"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19336"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:2217"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258."
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_r"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8768"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20852"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8535"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10743"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15718"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20657"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19126"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8518"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12448"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8611"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8203"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-6251"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8676"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-1549"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-9251"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17451"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20060"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-19519"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-7150"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-1547"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-7664"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8607"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12052"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5482"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15366"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8690"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20060"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13752"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8601"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3822"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11324"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-7146"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-1010204"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-7013"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11324"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11236"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-10739"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-18751"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-16890"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5481"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8686"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8671"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12447"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8544"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12049"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8571"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-19519"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2013-0169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8677"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5436"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-18624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8595"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13753"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11459"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12447"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8679"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20657"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5094"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-6454"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20852"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20483"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14336"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4298"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8622"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-1010180"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8681"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3825"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8523"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-18074"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2013-0169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-6237"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-6706"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20483"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20337"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8559"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8687"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13822"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19923"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-16769"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8672"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14822"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14404"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8608"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-7662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8615"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12449"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-7665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8666"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8457"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5953"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8689"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15847"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-14498"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8735"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11236"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8586"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12245"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-14404"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8726"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-1010204"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8596"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8610"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13636"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-1563"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-16890"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11070"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14498"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-7149"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-16056"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10739"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20337"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-18074"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-11110"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8584"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19959"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8675"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8563"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10531"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13232"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14040"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-1010180"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12449"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10715"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8609"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9283"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8587"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-18751"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8506"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-18624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8583"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-9251"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12448"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-11008"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11459"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8597"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-47629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1047"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-21843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4039"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-37603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-21835"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4137"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1044"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0554"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3143"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2015-9251"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/installation_guide/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14041"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40150"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-10735"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-18214"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40152"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40149"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10735"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40152"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-9251"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-14041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-18214"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3143"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "PACKETSTORM",
"id": "161727"
},
{
"db": "PACKETSTORM",
"id": "158750"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159727"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170819"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2429"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "PACKETSTORM",
"id": "161727"
},
{
"db": "PACKETSTORM",
"id": "158750"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159727"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170819"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2429"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2020-04-29T00:00:00",
"db": "VULHUB",
"id": "VHN-163559"
},
{
"date": "2021-03-09T16:25:11",
"db": "PACKETSTORM",
"id": "161727"
},
{
"date": "2020-08-04T14:26:33",
"db": "PACKETSTORM",
"id": "158750"
},
{
"date": "2020-05-28T16:07:33",
"db": "PACKETSTORM",
"id": "157850"
},
{
"date": "2020-10-27T16:59:02",
"db": "PACKETSTORM",
"id": "159727"
},
{
"date": "2023-03-02T15:19:44",
"db": "PACKETSTORM",
"id": "171215"
},
{
"date": "2023-03-02T15:19:02",
"db": "PACKETSTORM",
"id": "171211"
},
{
"date": "2023-01-31T17:19:24",
"db": "PACKETSTORM",
"id": "170819"
},
{
"date": "2020-04-29T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202004-2429"
},
{
"date": "2020-04-29T22:15:11.903000",
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-07-25T00:00:00",
"db": "VULHUB",
"id": "VHN-163559"
},
{
"date": "2023-03-21T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202004-2429"
},
{
"date": "2023-11-07T03:14:27.330000",
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202004-2429"
}
],
"trust": 0.6
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "jQuery Cross-site scripting vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202004-2429"
}
],
"trust": 0.6
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "xss",
"sources": [
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2429"
}
],
"trust": 0.7
}
}
VAR-202210-1888
Vulnerability from variot - Updated: 2024-07-23 21:58

When doing HTTP(S) transfers, libcurl might erroneously use the read callback (CURLOPT_READFUNCTION) to ask for data to send, even when the CURLOPT_POSTFIELDS option has been set, if the same handle previously was used to issue a PUT request which used that callback. This flaw may surprise the application and cause it to misbehave and either send off the wrong data or use memory after free or similar in the subsequent POST request. The problem exists in the logic for a reused handle when it is changed from a PUT to a POST. cURL from Haxx, as well as products from several other vendors, contains a vulnerability related to the exposure of resources to the wrong sphere. Information may be obtained, information may be tampered with, and a denial-of-service (DoS) condition may result. (CVE-2022-42915). -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: curl security update
Advisory ID:       RHSA-2023:4139-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:4139
Issue date:        2023-07-18
CVE Names:         CVE-2022-32221 CVE-2023-23916
=====================================================================
- Summary:
An update for curl is now available for Red Hat Enterprise Linux 9.0 Extended Update Support.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream EUS (v.9.0) - aarch64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux BaseOS EUS (v.9.0) - aarch64, ppc64le, s390x, x86_64
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Security Fix(es):
- curl: POST following PUT confusion (CVE-2022-32221)

- curl: HTTP multi-header compression denial of service (CVE-2023-23916)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2135411 - CVE-2022-32221 curl: POST following PUT confusion
2167815 - CVE-2023-23916 curl: HTTP multi-header compression denial of service
- Package List:
Red Hat Enterprise Linux AppStream EUS (v.9.0):
aarch64:
curl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm
curl-debugsource-7.76.1-14.el9_0.6.aarch64.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm
libcurl-devel-7.76.1-14.el9_0.6.aarch64.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm

ppc64le:
curl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm
curl-debugsource-7.76.1-14.el9_0.6.ppc64le.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm
libcurl-devel-7.76.1-14.el9_0.6.ppc64le.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm

s390x:
curl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm
curl-debugsource-7.76.1-14.el9_0.6.s390x.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm
libcurl-devel-7.76.1-14.el9_0.6.s390x.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm

x86_64:
curl-debuginfo-7.76.1-14.el9_0.6.i686.rpm
curl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
curl-debugsource-7.76.1-14.el9_0.6.i686.rpm
curl-debugsource-7.76.1-14.el9_0.6.x86_64.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
libcurl-devel-7.76.1-14.el9_0.6.i686.rpm
libcurl-devel-7.76.1-14.el9_0.6.x86_64.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
Red Hat Enterprise Linux BaseOS EUS (v.9.0):
Source: curl-7.76.1-14.el9_0.6.src.rpm
aarch64:
curl-7.76.1-14.el9_0.6.aarch64.rpm
curl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm
curl-debugsource-7.76.1-14.el9_0.6.aarch64.rpm
curl-minimal-7.76.1-14.el9_0.6.aarch64.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm
libcurl-7.76.1-14.el9_0.6.aarch64.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm
libcurl-minimal-7.76.1-14.el9_0.6.aarch64.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm

ppc64le:
curl-7.76.1-14.el9_0.6.ppc64le.rpm
curl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm
curl-debugsource-7.76.1-14.el9_0.6.ppc64le.rpm
curl-minimal-7.76.1-14.el9_0.6.ppc64le.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm
libcurl-7.76.1-14.el9_0.6.ppc64le.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm
libcurl-minimal-7.76.1-14.el9_0.6.ppc64le.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm

s390x:
curl-7.76.1-14.el9_0.6.s390x.rpm
curl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm
curl-debugsource-7.76.1-14.el9_0.6.s390x.rpm
curl-minimal-7.76.1-14.el9_0.6.s390x.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm
libcurl-7.76.1-14.el9_0.6.s390x.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm
libcurl-minimal-7.76.1-14.el9_0.6.s390x.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm

x86_64:
curl-7.76.1-14.el9_0.6.x86_64.rpm
curl-debuginfo-7.76.1-14.el9_0.6.i686.rpm
curl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
curl-debugsource-7.76.1-14.el9_0.6.i686.rpm
curl-debugsource-7.76.1-14.el9_0.6.x86_64.rpm
curl-minimal-7.76.1-14.el9_0.6.x86_64.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm
curl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
libcurl-7.76.1-14.el9_0.6.i686.rpm
libcurl-7.76.1-14.el9_0.6.x86_64.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm
libcurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
libcurl-minimal-7.76.1-14.el9_0.6.i686.rpm
libcurl-minimal-7.76.1-14.el9_0.6.x86_64.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm
libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-32221
https://access.redhat.com/security/cve/CVE-2023-23916
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.

==========================================================================
Ubuntu Security Notice USN-5702-1
October 26, 2022
curl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.10
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in curl.
Software Description:
- curl: HTTP, HTTPS, and FTP client and client libraries
Details:
Robby Simpson discovered that curl incorrectly handled certain POST operations after PUT operations. (CVE-2022-32221)
Hiroki Kurosawa discovered that curl incorrectly handled parsing .netrc files. If an attacker were able to provide a specially crafted .netrc file, this issue could cause curl to crash, resulting in a denial of service. This issue only affected Ubuntu 22.10. (CVE-2022-35260)
It was discovered that curl incorrectly handled certain HTTP proxy return codes. A remote attacker could use this issue to cause curl to crash, resulting in a denial of service, or possibly execute arbitrary code. This issue only affected Ubuntu 22.04 LTS, and Ubuntu 22.10. (CVE-2022-42915)
Hiroki Kurosawa discovered that curl incorrectly handled HSTS support when certain hostnames included IDN characters. A remote attacker could possibly use this issue to cause curl to use unencrypted connections. This issue only affected Ubuntu 22.04 LTS, and Ubuntu 22.10. (CVE-2022-42916)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.10:
  curl 7.85.0-1ubuntu0.1
  libcurl3-gnutls 7.85.0-1ubuntu0.1
  libcurl3-nss 7.85.0-1ubuntu0.1
  libcurl4 7.85.0-1ubuntu0.1

Ubuntu 22.04 LTS:
  curl 7.81.0-1ubuntu1.6
  libcurl3-gnutls 7.81.0-1ubuntu1.6
  libcurl3-nss 7.81.0-1ubuntu1.6
  libcurl4 7.81.0-1ubuntu1.6

Ubuntu 20.04 LTS:
  curl 7.68.0-1ubuntu2.14
  libcurl3-gnutls 7.68.0-1ubuntu2.14
  libcurl3-nss 7.68.0-1ubuntu2.14
  libcurl4 7.68.0-1ubuntu2.14

Ubuntu 18.04 LTS:
  curl 7.58.0-2ubuntu3.21
  libcurl3-gnutls 7.58.0-2ubuntu3.21
  libcurl3-nss 7.58.0-2ubuntu3.21
  libcurl4 7.58.0-2ubuntu3.21
In general, a standard system update will make all the necessary changes.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory GLSA 202212-01
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
https://security.gentoo.org/
Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01
Synopsis
Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.
Background
A command line tool and library for transferring data with URLs.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/curl < 7.86.0 >= 7.86.0
Description
Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All curl users should upgrade to the latest version:
  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"
References
[ 1 ] CVE-2021-22922 https://nvd.nist.gov/vuln/detail/CVE-2021-22922
[ 2 ] CVE-2021-22923 https://nvd.nist.gov/vuln/detail/CVE-2021-22923
[ 3 ] CVE-2021-22925 https://nvd.nist.gov/vuln/detail/CVE-2021-22925
[ 4 ] CVE-2021-22926 https://nvd.nist.gov/vuln/detail/CVE-2021-22926
[ 5 ] CVE-2021-22945 https://nvd.nist.gov/vuln/detail/CVE-2021-22945
[ 6 ] CVE-2021-22946 https://nvd.nist.gov/vuln/detail/CVE-2021-22946
[ 7 ] CVE-2021-22947 https://nvd.nist.gov/vuln/detail/CVE-2021-22947
[ 8 ] CVE-2022-22576 https://nvd.nist.gov/vuln/detail/CVE-2022-22576
[ 9 ] CVE-2022-27774 https://nvd.nist.gov/vuln/detail/CVE-2022-27774
[ 10 ] CVE-2022-27775 https://nvd.nist.gov/vuln/detail/CVE-2022-27775
[ 11 ] CVE-2022-27776 https://nvd.nist.gov/vuln/detail/CVE-2022-27776
[ 12 ] CVE-2022-27779 https://nvd.nist.gov/vuln/detail/CVE-2022-27779
[ 13 ] CVE-2022-27780 https://nvd.nist.gov/vuln/detail/CVE-2022-27780
[ 14 ] CVE-2022-27781 https://nvd.nist.gov/vuln/detail/CVE-2022-27781
[ 15 ] CVE-2022-27782 https://nvd.nist.gov/vuln/detail/CVE-2022-27782
[ 16 ] CVE-2022-30115 https://nvd.nist.gov/vuln/detail/CVE-2022-30115
[ 17 ] CVE-2022-32205 https://nvd.nist.gov/vuln/detail/CVE-2022-32205
[ 18 ] CVE-2022-32206 https://nvd.nist.gov/vuln/detail/CVE-2022-32206
[ 19 ] CVE-2022-32207 https://nvd.nist.gov/vuln/detail/CVE-2022-32207
[ 20 ] CVE-2022-32208 https://nvd.nist.gov/vuln/detail/CVE-2022-32208
[ 21 ] CVE-2022-32221 https://nvd.nist.gov/vuln/detail/CVE-2022-32221
[ 22 ] CVE-2022-35252 https://nvd.nist.gov/vuln/detail/CVE-2022-35252
[ 23 ] CVE-2022-35260 https://nvd.nist.gov/vuln/detail/CVE-2022-35260
[ 24 ] CVE-2022-42915 https://nvd.nist.gov/vuln/detail/CVE-2022-42915
[ 25 ] CVE-2022-42916 https://nvd.nist.gov/vuln/detail/CVE-2022-42916
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202212-01
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
APPLE-SA-2023-01-23-5 macOS Monterey 12.6.3
macOS Monterey 12.6.3 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213604.
AppleMobileFileIntegrity Available for: macOS Monterey Impact: An app may be able to access user-sensitive data Description: This issue was addressed by enabling hardened runtime. CVE-2023-23499: Wojciech Reguła (@_r3ggi) of SecuRing (wojciechregula.blog)
curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.86.0. CVE-2022-42915 CVE-2022-42916 CVE-2022-32221 CVE-2022-35260
curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.85.0. CVE-2022-35252
dcerpc Available for: macOS Monterey Impact: Mounting a maliciously crafted Samba network share may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. CVE-2023-23513: Dimitrios Tatsis and Aleksandar Nikolic of Cisco Talos
DiskArbitration Available for: macOS Monterey Impact: An encrypted volume may be unmounted and remounted by a different user without prompting for the password Description: A logic issue was addressed with improved state management. CVE-2023-23493: Oliver Norpoth (@norpoth) of KLIXX GmbH (klixx.com)
DriverKit Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: A type confusion issue was addressed with improved checks. CVE-2022-32915: Tommy Muir (@Muirey03)
Intel Graphics Driver Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2023-23507: an anonymous researcher
Kernel Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2023-23504: Adam Doupé of ASU SEFCOM
Kernel Available for: macOS Monterey Impact: An app may be able to determine kernel memory layout Description: An information disclosure issue was addressed by removing the vulnerable code. CVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. Ltd. (@starlabs_sg)
PackageKit Available for: macOS Monterey Impact: An app may be able to gain root privileges Description: A logic issue was addressed with improved state management. CVE-2023-23497: Mickey Jin (@patch1t)
Screen Time Available for: macOS Monterey Impact: An app may be able to access information about a user’s contacts Description: A privacy issue was addressed with improved private data redaction for log entries. CVE-2023-23505: Wojciech Regula of SecuRing (wojciechregula.blog)
Weather Available for: macOS Monterey Impact: An app may be able to bypass Privacy preferences Description: The issue was addressed with improved memory handling. CVE-2023-23511: Wojciech Regula of SecuRing (wojciechregula.blog), an anonymous researcher
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: The issue was addressed with improved memory handling. WebKit Bugzilla: 248268 CVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE WebKit Bugzilla: 248268 CVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE
Windows Installer Available for: macOS Monterey Impact: An app may be able to bypass Privacy preferences Description: The issue was addressed with improved memory handling. CVE-2023-23508: Mickey Jin (@patch1t)
Additional recognition
Kernel We would like to acknowledge Nick Stenning of Replicate for their assistance.
macOS Monterey 12.6.3 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmPPIl8ACgkQ4RjMIDke NxlXeg/+JwvCu4zHuDTptxkKw1gxht0hTTVJvNjgNWj2XFtnOz9kCIupGj/xPTMl P9vABQWRbpEJfCJE31FbUcxRCMcr/jm8dDb1ocx4qsGLlY5iGLQ/M1G6OLxQVajg gGaCSMjW9Zk7l7sXztKj4XcirsB3ft9tiRJwgPUE0znaT70970usFdg+q95CzODm DHVoa5VNLi0zA4178CIiq4WayAZe90cRYYQQ1+Okjhab/U/blfGgEPhA/rrdjQ85 J4NKTXyGBIWl+Ix4HpLikYpnwm/TKyYiY+MogZ6xUmFwnUgXkPCc4gYdPANQk064 KNjy90yq3Os9IBjfDpw+Pqs6I3GMZ0oNYUKWO+45/0NVp5/qjDFt5K7ZS+Xz5Py7 YrodbwaYiESzsdfLja9ILf8X7taDLHxxfHEvWcXnhcMD1XNKU6mpsb8SOLicWYzp 8maZarjhzQl3dQi4Kz3vQk0hKHTIE6/04fRdDpqhM9WXljayLLO7bsryW8u98Y/b fR3BXgfsll+QjdLDeW3nfuY+q2JsW0a2lhJZnxuRQPC+wUGDoCY7vcCbv8zkx0oo y1w8VDBdUjj7vyVSAqoZNlpgl1ebKgciVhTvrgTsyVxuOA94VzDCeI5/6RkjDAJ+ WL2Em8qc4aqXvGEwimKdNkETbyqIRcNVXWhXLVGLsmHvDViVjGQ= =BbMS -----END PGP SIGNATURE-----
Software Description:
- mysql-8.0: MySQL database
- mysql-5.7: MySQL database
Details:
Multiple security issues were discovered in MySQL and this update includes new upstream MySQL versions to fix these issues.
In addition to security fixes, the updated packages contain bug fixes, new features, and possibly incompatible changes. In general, a standard system update will make all the necessary changes.
For the stable distribution (bullseye), these problems have been fixed in version 7.74.0-1.3+deb11u5. This update also revises the fix for CVE-2022-27774 released in DSA-5197-1.
We recommend that you upgrade your curl packages.
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202210-1888",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.86.0"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.3"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "gnu/linux",
"scope": null,
"trust": 0.8,
"vendor": "debian",
"version": null
},
{
"model": "curl",
"scope": null,
"trust": 0.8,
"vendor": "haxx",
"version": null
},
{
"model": "h410s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "ontap",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "eq",
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": "12.6.3"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:haxx:curl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.86.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.6.3",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:9.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.0.6",
"versionStartIncluding": "9.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.2.12",
"versionStartIncluding": "8.2.0",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Ubuntu",
"sources": [
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170729"
}
],
"trust": 0.3
},
"cve": "CVE-2022-32221",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 9.8,
"baseSeverity": "Critical",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2022-32221",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-32221",
"trust": 1.8,
"value": "CRITICAL"
},
{
"author": "CNNVD",
"id": "CNNVD-202210-2214",
"trust": 0.6,
"value": "CRITICAL"
}
]
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "When doing HTTP(S) transfers, libcurl might erroneously use the read callback (`CURLOPT_READFUNCTION`) to ask for data to send, even when the `CURLOPT_POSTFIELDS` option has been set, if the same handle previously was used to issue a `PUT` request which used that callback. This flaw may surprise the application and cause it to misbehave and either send off the wrong data or use memory after free or similar in the subsequent `POST` request. The problem exists in the logic for a reused handle when it is changed from a PUT to a POST. Haxx of cURL Products from other vendors have vulnerabilities related to resource disclosure to the wrong domain.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. (CVE-2022-42915). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: curl security update\nAdvisory ID: RHSA-2023:4139-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:4139\nIssue date: 2023-07-18\nCVE Names: CVE-2022-32221 CVE-2023-23916 \n=====================================================================\n\n1. Summary:\n\nAn update for curl is now available for Red Hat Enterprise Linux 9.0\nExtended Update Support. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream EUS (v.9.0) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS EUS (v.9.0) - aarch64, ppc64le, s390x, x86_64\n\n3. 
Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. \n\nSecurity Fix(es):\n\n* curl: POST following PUT confusion (CVE-2022-32221)\n\n* curl: HTTP multi-header compression denial of service (CVE-2023-23916)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2135411 - CVE-2022-32221 curl: POST following PUT confusion\n2167815 - CVE-2023-23916 curl: HTTP multi-header compression denial of service\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream EUS (v.9.0):\n\naarch64:\ncurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\n\nppc64le:\ncurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\n\ns390x:\ncurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.s390x.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\n\nx86_64:\ncurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-debugsource-7.76.1-14.el
9_0.6.i686.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS EUS (v.9.0):\n\nSource:\ncurl-7.76.1-14.el9_0.6.src.rpm\n\naarch64:\ncurl-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-minimal-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\n\nppc64le:\ncurl-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-minimal-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\n\ns390x:\ncurl-7.76.1-14.el9_0.6.s390x.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.s390x.rpm\ncurl-minimal-7.76.1-14.el9_0.6.s390x.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\n\nx86_64:\ncurl-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.i686.rp
m\ncurl-debugsource-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-minimal-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32221\nhttps://access.redhat.com/security/cve/CVE-2023-23916\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. ==========================================================================\nUbuntu Security Notice USN-5702-1\nOctober 26, 2022\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.10\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in curl. \n\nSoftware Description:\n- curl: HTTP, HTTPS, and FTP client and client libraries\n\nDetails:\n\nRobby Simpson discovered that curl incorrectly handled certain POST\noperations after PUT operations. \n(CVE-2022-32221)\n\nHiroki Kurosawa discovered that curl incorrectly handled parsing .netrc\nfiles. 
If an attacker were able to provide a specially crafted .netrc file,\nthis issue could cause curl to crash, resulting in a denial of service. \nThis issue only affected Ubuntu 22.10. (CVE-2022-35260)\n\nIt was discovered that curl incorrectly handled certain HTTP proxy return\ncodes. A remote attacker could use this issue to cause curl to crash,\nresulting in a denial of service, or possibly execute arbitrary code. This\nissue only affected Ubuntu 22.04 LTS, and Ubuntu 22.10. (CVE-2022-42915)\n\nHiroki Kurosawa discovered that curl incorrectly handled HSTS support\nwhen certain hostnames included IDN characters. A remote attacker could\npossibly use this issue to cause curl to use unencrypted connections. This\nissue only affected Ubuntu 22.04 LTS, and Ubuntu 22.10. (CVE-2022-42916)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.10:\n curl 7.85.0-1ubuntu0.1\n libcurl3-gnutls 7.85.0-1ubuntu0.1\n libcurl3-nss 7.85.0-1ubuntu0.1\n libcurl4 7.85.0-1ubuntu0.1\n\nUbuntu 22.04 LTS:\n curl 7.81.0-1ubuntu1.6\n libcurl3-gnutls 7.81.0-1ubuntu1.6\n libcurl3-nss 7.81.0-1ubuntu1.6\n libcurl4 7.81.0-1ubuntu1.6\n\nUbuntu 20.04 LTS:\n curl 7.68.0-1ubuntu2.14\n libcurl3-gnutls 7.68.0-1ubuntu2.14\n libcurl3-nss 7.68.0-1ubuntu2.14\n libcurl4 7.68.0-1ubuntu2.14\n\nUbuntu 18.04 LTS:\n curl 7.58.0-2ubuntu3.21\n libcurl3-gnutls 7.58.0-2ubuntu3.21\n libcurl3-nss 7.58.0-2ubuntu3.21\n libcurl4 7.58.0-2ubuntu3.21\n\nIn general, a standard system update will make all the necessary changes. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: curl: Multiple Vulnerabilities\n Date: December 19, 2022\n Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. \n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/curl \u003c 7.86.0 \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] CVE-2022-30115\n https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] CVE-2022-32206\n https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n 
https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2023-01-23-5 macOS Monterey 12.6.3\n\nmacOS Monterey 12.6.3 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213604. \n\nAppleMobileFileIntegrity\nAvailable for: macOS Monterey\nImpact: An app may be able to access user-sensitive data\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2023-23499: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n(wojciechregula.blog)\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.86.0. \nCVE-2022-42915\nCVE-2022-42916\nCVE-2022-32221\nCVE-2022-35260\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.85.0. \nCVE-2022-35252\n\ndcerpc\nAvailable for: macOS Monterey\nImpact: Mounting a maliciously crafted Samba network share may lead\nto arbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. 
\nCVE-2023-23513: Dimitrios Tatsis and Aleksandar Nikolic of Cisco\nTalos\n\nDiskArbitration\nAvailable for: macOS Monterey\nImpact: An encrypted volume may be unmounted and remounted by a\ndifferent user without prompting for the password\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23493: Oliver Norpoth (@norpoth) of KLIXX GmbH (klixx.com)\n\nDriverKit\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A type confusion issue was addressed with improved\nchecks. \nCVE-2022-32915: Tommy Muir (@Muirey03)\n\nIntel Graphics Driver\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2023-23507: an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23504: Adam Doup\u00e9 of ASU SEFCOM\n\nKernel\nAvailable for: macOS Monterey\nImpact: An app may be able to determine kernel memory layout\nDescription: An information disclosure issue was addressed by\nremoving the vulnerable code. \nCVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. \nLtd. (@starlabs_sg)\n\nPackageKit\nAvailable for: macOS Monterey\nImpact: An app may be able to gain root privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23497: Mickey Jin (@patch1t)\n\nScreen Time\nAvailable for: macOS Monterey\nImpact: An app may be able to access information about a user\u2019s\ncontacts\nDescription: A privacy issue was addressed with improved private data\nredaction for log entries. 
\nCVE-2023-23505: Wojciech Regula of SecuRing (wojciechregula.blog)\n\nWeather\nAvailable for: macOS Monterey\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23511: Wojciech Regula of SecuRing (wojciechregula.blog), an\nanonymous researcher\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: The issue was addressed with improved memory handling. \nWebKit Bugzilla: 248268\nCVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\nWebKit Bugzilla: 248268\nCVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\n\nWindows Installer\nAvailable for: macOS Monterey\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23508: Mickey Jin (@patch1t)\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Nick Stenning of Replicate for their\nassistance. \n\nmacOS Monterey 12.6.3 may be obtained from the Mac App Store or\nApple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmPPIl8ACgkQ4RjMIDke\nNxlXeg/+JwvCu4zHuDTptxkKw1gxht0hTTVJvNjgNWj2XFtnOz9kCIupGj/xPTMl\nP9vABQWRbpEJfCJE31FbUcxRCMcr/jm8dDb1ocx4qsGLlY5iGLQ/M1G6OLxQVajg\ngGaCSMjW9Zk7l7sXztKj4XcirsB3ft9tiRJwgPUE0znaT70970usFdg+q95CzODm\nDHVoa5VNLi0zA4178CIiq4WayAZe90cRYYQQ1+Okjhab/U/blfGgEPhA/rrdjQ85\nJ4NKTXyGBIWl+Ix4HpLikYpnwm/TKyYiY+MogZ6xUmFwnUgXkPCc4gYdPANQk064\nKNjy90yq3Os9IBjfDpw+Pqs6I3GMZ0oNYUKWO+45/0NVp5/qjDFt5K7ZS+Xz5Py7\nYrodbwaYiESzsdfLja9ILf8X7taDLHxxfHEvWcXnhcMD1XNKU6mpsb8SOLicWYzp\n8maZarjhzQl3dQi4Kz3vQk0hKHTIE6/04fRdDpqhM9WXljayLLO7bsryW8u98Y/b\nfR3BXgfsll+QjdLDeW3nfuY+q2JsW0a2lhJZnxuRQPC+wUGDoCY7vcCbv8zkx0oo\ny1w8VDBdUjj7vyVSAqoZNlpgl1ebKgciVhTvrgTsyVxuOA94VzDCeI5/6RkjDAJ+\nWL2Em8qc4aqXvGEwimKdNkETbyqIRcNVXWhXLVGLsmHvDViVjGQ=\n=BbMS\n-----END PGP SIGNATURE-----\n\n\n. \n\nSoftware Description:\n- mysql-8.0: MySQL database\n- mysql-5.7: MySQL database\n\nDetails:\n\nMultiple security issues were discovered in MySQL and this update includes\nnew upstream MySQL versions to fix these issues. \n\nIn addition to security fixes, the updated packages contain bug fixes, new\nfeatures, and possibly incompatible changes. In general, a standard system update will make all the necessary\nchanges. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 7.74.0-1.3+deb11u5. This update also revises the fix for\nCVE-2022-27774 released in DSA-5197-1. \n\nWe recommend that you upgrade your curl packages",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32221"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "PACKETSTORM",
"id": "173569"
},
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170729"
},
{
"db": "PACKETSTORM",
"id": "170777"
}
],
"trust": 2.43
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-32221",
"trust": 4.1
},
{
"db": "HACKERONE",
"id": "1704017",
"trust": 2.5
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2023/05/17/4",
"trust": 2.4
},
{
"db": "PACKETSTORM",
"id": "170777",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169535",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169538",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU98195668",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-23-131-05",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023343",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170166",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3143",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.4030",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5421",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6333",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "170729",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170648",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-424148",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-32221",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "173569",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170697",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"db": "PACKETSTORM",
"id": "173569"
},
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170729"
},
{
"db": "PACKETSTORM",
"id": "170777"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"id": "VAR-202210-1888",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-424148"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T21:58:55.307000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "HT213605",
"trust": 0.8,
"url": "https://lists.debian.org/debian-lts-announce/2023/01/msg00028.html"
},
{
"title": "curl Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=216855"
},
{
"title": "Ubuntu Security Notice: USN-5702-2: curl vulnerability",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5702-2"
},
{
"title": "Ubuntu Security Notice: USN-5702-1: curl vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5702-1"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-32221"
},
{
"title": "IBM: Security Bulletin: The Community Edition of IBM ILOG CPLEX Optimization Studio is affected by multiple vulnerabilities in libcurl (CVE-2022-42915, CVE-2022-42916, CVE-2022-32221)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=93e8baf3e9bfd9ab92a05b44368ef244"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-668",
"trust": 1.1
},
{
"problemtype": "Leakage of resources to the wrong area (CWE-668) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 2.6,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 2.5,
"url": "http://seclists.org/fulldisclosure/2023/jan/19"
},
{
"trust": 2.5,
"url": "http://seclists.org/fulldisclosure/2023/jan/20"
},
{
"trust": 2.5,
"url": "https://hackerone.com/reports/1704017"
},
{
"trust": 2.4,
"url": "http://www.openwall.com/lists/oss-security/2023/05/17/4"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20230110-0006/"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20230208-0002/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213604"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213605"
},
{
"trust": 1.7,
"url": "https://www.debian.org/security/2023/dsa-5330"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2023/01/msg00028.html"
},
{
"trust": 1.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-32221"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu98195668/"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-131-05"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-32221/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.4030"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/curl-reuse-after-free-39731"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213604"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169538/ubuntu-security-notice-usn-5702-2.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169535/ubuntu-security-notice-usn-5702-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5421"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170166/red-hat-security-advisory-2022-8840-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6333"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170777/debian-security-advisory-5330-1.html"
},
{
"trust": 0.3,
"url": "https://ubuntu.com/security/notices/usn-5702-1"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-5702-2"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:4139"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23916"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23916"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.81.0-1ubuntu1.6"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.68.0-1ubuntu2.14"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.85.0-1ubuntu0.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23507"
},
{
"trust": 0.1,
"url": "https://support.apple.com/downloads/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23493"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23497"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23504"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23505"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23499"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23508"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213604."
},
{
"trust": 0.1,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23502"
},
{
"trust": 0.1,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/mysql-8.0/8.0.32-0buntu0.20.04.1"
},
{
"trust": 0.1,
"url": "https://www.oracle.com/security-alerts/cpujan2023.html"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/mysql-8.0/8.0.32-0buntu0.22.10.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-21877"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-21881"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/mysql-8.0/8.0.32-0buntu0.22.04.1"
},
{
"trust": 0.1,
"url": "https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-32.html"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/mysql-5.7/5.7.41-0ubuntu0.18.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-21871"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-21867"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5823-1"
},
{
"trust": 0.1,
"url": "https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-41.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-43552"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/curl"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"db": "PACKETSTORM",
"id": "173569"
},
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170729"
},
{
"db": "PACKETSTORM",
"id": "170777"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"db": "PACKETSTORM",
"id": "173569"
},
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170729"
},
{
"db": "PACKETSTORM",
"id": "170777"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-12-05T00:00:00",
"db": "VULHUB",
"id": "VHN-424148"
},
{
"date": "2023-11-28T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"date": "2023-07-18T13:47:37",
"db": "PACKETSTORM",
"id": "173569"
},
{
"date": "2022-10-27T13:04:37",
"db": "PACKETSTORM",
"id": "169538"
},
{
"date": "2022-10-27T13:03:39",
"db": "PACKETSTORM",
"id": "169535"
},
{
"date": "2022-12-19T13:48:31",
"db": "PACKETSTORM",
"id": "170303"
},
{
"date": "2023-01-24T16:41:07",
"db": "PACKETSTORM",
"id": "170697"
},
{
"date": "2023-01-25T16:09:53",
"db": "PACKETSTORM",
"id": "170729"
},
{
"date": "2023-01-30T16:25:15",
"db": "PACKETSTORM",
"id": "170777"
},
{
"date": "2022-10-26T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"date": "2022-12-05T22:15:10.343000",
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-03-01T00:00:00",
"db": "VULHUB",
"id": "VHN-424148"
},
{
"date": "2023-11-28T06:56:00",
"db": "JVNDB",
"id": "JVNDB-2022-023343"
},
{
"date": "2023-07-19T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"date": "2024-03-27T15:00:28.423000",
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
}
],
"trust": 0.6
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Haxx\u00a0 of \u00a0cURL\u00a0 Vulnerability related to resource leakage to the wrong area in products from other vendors",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-023343"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
}
],
"trust": 0.6
}
}
VAR-202109-1789
Vulnerability from variot - Updated: 2024-07-23 21:48. When curl >= 7.20.0 and <= 7.78.0 connects to an IMAP or POP3 server to retrieve data using STARTTLS to upgrade to TLS security, the server can respond and send back multiple responses at once that curl caches. curl would then upgrade to TLS but not flush the in-queue of cached responses, instead continuing to use and trust the responses it got before the TLS handshake as if they were authenticated. Using this flaw, a man-in-the-middle attacker can first inject fake responses, then pass through the TLS traffic from the legitimate server and trick curl into sending data back to the user, making it appear that the attacker's injected data comes from the TLS-protected server. A STARTTLS protocol injection flaw via man-in-the-middle was found in curl prior to 7.79.0. Such multiple "pipelined" responses are cached by curl. Over POP3 and IMAP an attacker can inject fake response data. Description:
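The STARTTLS flaw above comes down to a client continuing to trust plaintext responses that were buffered before the TLS handshake. The following is a minimal Python sketch of that failure mode and of the fix shipped in curl 7.79.0 (flushing the cached queue at upgrade time); the `ImapSession` class and all names in it are illustrative, not curl's actual internals.

```python
# Illustrative sketch of CVE-2021-22947. All names here are hypothetical;
# this is not curl's real implementation.

class ImapSession:
    """Toy STARTTLS client state: a receive queue and a TLS flag."""

    def __init__(self):
        self.rx_queue = []      # responses read off the socket, not yet consumed
        self.tls_active = False

    def receive(self, *lines):
        # A MITM can "pipeline" fake responses after the server's
        # genuine "OK Begin TLS" reply; all of them get cached.
        self.rx_queue.extend(lines)

    def next_response(self):
        return self.rx_queue.pop(0) if self.rx_queue else None

    def upgrade_to_tls(self, flush_queue):
        # The 7.79.0 fix amounts to discarding any cached plaintext
        # responses at the moment the connection switches to TLS.
        if flush_queue:
            self.rx_queue.clear()
        self.tls_active = True


def fetch_after_starttls(flush_queue):
    s = ImapSession()
    s.receive("a OK Begin TLS", "x OK [injected by attacker]")
    assert s.next_response() == "a OK Begin TLS"   # consume the STARTTLS reply
    s.upgrade_to_tls(flush_queue)
    return s.next_response()                       # what the client trusts next


# Pre-7.79.0 behaviour: the injected plaintext line survives the upgrade.
assert fetch_after_starttls(flush_queue=False) == "x OK [injected by attacker]"
# Fixed behaviour: the queue is flushed, so nothing pre-TLS is trusted.
assert fetch_after_starttls(flush_queue=True) is None
```

The point of the sketch is that the injected line is indistinguishable from authenticated data once TLS is up, which is why the fix discards rather than inspects the pre-handshake queue.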
Red Hat 3scale API Management delivers centralized API management features through a distributed, cloud-hosted layer. It includes built-in features to help in building a more successful API program, including access control, rate limits, payment gateway integration, and developer experience tools. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.11/html-single/installing_3scale/index
- Bugs fixed (https://bugzilla.redhat.com/):
1912487 - CVE-2020-26247 rubygem-nokogiri: XML external entity injection via Nokogiri::XML::Schema
- JIRA issues fixed (https://issues.jboss.org/):
THREESCALE-6868 - [3scale][2.11][LO-prio] Improve select default Application plan
THREESCALE-6879 - [3scale][2.11][HI-prio] Add 'Create new Application' flow to Product > Applications index
THREESCALE-7030 - Address scalability in 'Create new Application' form
THREESCALE-7203 - Fix Zync resync command in 5.6.9. Creating equivalent Zync routes
THREESCALE-7475 - Some api calls result in "Destroying user session"
THREESCALE-7488 - Ability to add external Lua dependencies for custom policies
THREESCALE-7573 - Enable proxy environment variables via the APICAST CRD
THREESCALE-7605 - type change of "policies_config" in /admin/api/services/{service_id}/proxy.json
THREESCALE-7633 - Signup form in developer portal is disabled for users authenticated via external SSO
THREESCALE-7644 - Metrics: Service for 3scale operator is missing
THREESCALE-7646 - Cleanup/refactor Products and Backends index logic
THREESCALE-7648 - Remove "#context-menu" from the url
THREESCALE-7704 - Images based on RHEL 7 should contain at least ca-certificates-2021.2.50-72.el7_9.noarch.rpm
THREESCALE-7731 - Reenable operator metrics service for apicast-operator
THREESCALE-7761 - 3scale Operator doesn't respect *_proxy env vars
THREESCALE-7765 - Remove MessageBus from System
THREESCALE-7834 - admin can't create application when developer is not allowed to pick a plan
THREESCALE-7863 - Update some Obsolete API's in 3scale_v2.js
THREESCALE-7884 - Service top application endpoint is not working properly
THREESCALE-7912 - ServiceMonitor created by monitoring showing HTTP 400 error
THREESCALE-7913 - ServiceMonitor for 3scale operator has wide selector
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for moderate instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't convey any message to the end user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reset to “” during the installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentation link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stops working
2012902 - Neutron Ports assigned to Completed Pods are not reused
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on the NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVirt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim column value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info to the user and just tries to load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - Running opm index prune fails with error: removing operator package cic-operator: FOREIGN KEY constraint failed
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size of the Windows VM is 15Gi in the customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not be in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration are enabled in the action list for a user who has view-only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop up immediately when clicking "Create VM" button on template page for a view-only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned PVC attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructuring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - 'Cluster should allow a fast rollout of kube-apiserver' test is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link, when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store , backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking"Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project is missing the annotation: openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more than one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io] A common cluster user using the storageclass without the "csi.storage.k8s.io/fstype" parameter can create a PVC and pod successfully, but writing data to the pod's volume fails with "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor if multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when updating spec.storage.ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manager should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577
https://access.redhat.com/security/cve/CVE-2016-10228
https://access.redhat.com/security/cve/CVE-2017-14502
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2018-1000858
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9169
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-25013
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-8927
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-9952
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-13434
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-15358
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-25660
https://access.redhat.com/security/cve/CVE-2020-25677
https://access.redhat.com/security/cve/CVE-2020-27618
https://access.redhat.com/security/cve/CVE-2020-27781
https://access.redhat.com/security/cve/CVE-2020-29361
https://access.redhat.com/security/cve/CVE-2020-29362
https://access.redhat.com/security/cve/CVE-2020-29363
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/cve/CVE-2021-3326
https://access.redhat.com/security/cve/CVE-2021-3449
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3521
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3733
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/cve/CVE-2021-21684
https://access.redhat.com/security/cve/CVE-2021-22946
https://access.redhat.com/security/cve/CVE-2021-22947
https://access.redhat.com/security/cve/CVE-2021-25215
https://access.redhat.com/security/cve/CVE-2021-27218
https://access.redhat.com/security/cve/CVE-2021-30666
https://access.redhat.com/security/cve/CVE-2021-30761
https://access.redhat.com/security/cve/CVE-2021-30762
https://access.redhat.com/security/cve/CVE-2021-33928
https://access.redhat.com/security/cve/CVE-2021-33929
https://access.redhat.com/security/cve/CVE-2021-33930
https://access.redhat.com/security/cve/CVE-2021-33938
https://access.redhat.com/security/cve/CVE-2021-36222
https://access.redhat.com/security/cve/CVE-2021-37750
https://access.redhat.com/security/cve/CVE-2021-39226
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-43813
https://access.redhat.com/security/cve/CVE-2021-44716
https://access.redhat.com/security/cve/CVE-2021-44717
https://access.redhat.com/security/cve/CVE-2022-0532
https://access.redhat.com/security/cve/CVE-2022-21673
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

==========================================================================
Ubuntu Security Notice USN-5079-3
September 21, 2021
curl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 18.04 LTS
Summary:
USN-5079-1 introduced a regression in curl.
Software Description: - curl: HTTP, HTTPS, and FTP client and client libraries
Details:
USN-5079-1 fixed vulnerabilities in curl. One of the fixes introduced a regression on Ubuntu 18.04 LTS. This update fixes the problem.
We apologize for the inconvenience.
Original advisory details:
It was discovered that curl incorrectly handled memory when sending data to an MQTT server. A remote attacker could use this issue to cause curl to crash, resulting in a denial of service, or possibly execute arbitrary code. (CVE-2021-22945) Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. (CVE-2021-22946) Patrick Monnerat discovered that curl incorrectly handled responses received before STARTTLS. A remote attacker could possibly use this issue to inject responses and intercept communications. (CVE-2021-22947)
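The STARTTLS flaw above (CVE-2021-22947) comes down to a client trusting data that was buffered before the TLS handshake. A minimal, hypothetical sketch of the vulnerable versus patched buffering behaviour — this is an illustration of the bug class, not curl's actual code:

```python
# Toy mail client illustrating the CVE-2021-22947 class of bug (hypothetical,
# not curl's implementation): responses buffered before the STARTTLS upgrade
# must be discarded, or an attacker can smuggle data past the TLS boundary.

class ToyMailClient:
    def __init__(self, flush_on_tls_upgrade):
        self.flush_on_tls_upgrade = flush_on_tls_upgrade
        self.buffer = []          # responses read from the socket, in order
        self.tls_active = False

    def receive(self, data):
        # The server (or a man-in-the-middle) may send several "pipelined"
        # responses at once; all of them land in the same buffer.
        self.buffer.extend(data.splitlines())

    def upgrade_to_tls(self):
        self.tls_active = True
        if self.flush_on_tls_upgrade:
            # Patched behaviour (curl >= 7.79.0): drop anything received
            # before the handshake — it is unauthenticated.
            self.buffer.clear()

    def next_response(self):
        return self.buffer.pop(0) if self.buffer else None


# A MITM appends a fake response to the legitimate STARTTLS acknowledgement.
injected = "+OK begin TLS\r\n+OK injected by attacker\r\n"

vulnerable = ToyMailClient(flush_on_tls_upgrade=False)
vulnerable.receive(injected)
assert vulnerable.next_response() == "+OK begin TLS"   # legitimate ack
vulnerable.upgrade_to_tls()
# The injected line survives the upgrade and is trusted as if TLS-protected:
assert vulnerable.next_response() == "+OK injected by attacker"

patched = ToyMailClient(flush_on_tls_upgrade=True)
patched.receive(injected)
assert patched.next_response() == "+OK begin TLS"
patched.upgrade_to_tls()
assert patched.next_response() is None                 # pre-TLS data flushed
```

The fix is the single `buffer.clear()` at the upgrade boundary: anything read before the handshake completes has no authenticity guarantee.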
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 18.04 LTS:
  curl 7.58.0-2ubuntu3.16
  libcurl3-gnutls 7.58.0-2ubuntu3.16
  libcurl3-nss 7.58.0-2ubuntu3.16
  libcurl4 7.58.0-2ubuntu3.16
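Per the upstream advisory, the affected range for CVE-2021-22947 is curl >= 7.20.0 and < 7.79.0. A rough, hypothetical helper for checking an upstream version string against that range (note the caveat: Ubuntu backports security fixes without bumping the upstream version, so for distro packages the patched package revision above, not the upstream number, is what matters):

```python
# Hypothetical helper, not part of any advisory tooling: check whether a
# curl version string falls in the upstream range affected by
# CVE-2021-22947, i.e. >= 7.20.0 and < 7.79.0.

def parse_version(v):
    # "7.58.0-2ubuntu3.16" -> (7, 58, 0): ignore the distro revision suffix
    upstream = v.split("-")[0]
    return tuple(int(part) for part in upstream.split("."))

def is_affected(v):
    return (7, 20, 0) <= parse_version(v) < (7, 79, 0)

print(is_affected("7.58.0-2ubuntu3.16"))  # True: upstream 7.58.0 is in range
print(is_affected("7.79.0"))              # False: fixed upstream
print(is_affected("7.19.7"))              # False: predates the flaw
```

Tuple comparison keeps the check readable, but again: a backported distro package can report a version inside this range while already carrying the fix.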
In general, a standard system update will make all the necessary changes.

Summary:

The Migration Toolkit for Containers (MTC) 1.5.2 is now available.

Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution
2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)
2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster
2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration
2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
- Summary:
An update is now available for OpenShift Logging 5.1.4.

Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1858 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1917 - [release-5.1] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
Security fixes:
- CVE-2021-33623: nodejs-trim-newlines: ReDoS in .end() method
- CVE-2021-32626: redis: Lua scripts can overflow the heap-based Lua stack
- CVE-2021-32627: redis: Integer overflow issue with Streams
- CVE-2021-32628: redis: Integer overflow bug in the ziplist data structure
- CVE-2021-32672: redis: Out of bounds read in lua debugger protocol parser
- CVE-2021-32675: redis: Denial of service via Redis Standard Protocol (RESP) request
- CVE-2021-32687: redis: Integer overflow issue with intsets
- CVE-2021-32690: helm: information disclosure vulnerability
- CVE-2021-32803: nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite
- CVE-2021-32804: nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite
- CVE-2021-23017: nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
- CVE-2021-3711: openssl: SM2 Decryption Buffer Overflow
- CVE-2021-3712: openssl: Read buffer overruns processing ASN.1 strings
- CVE-2021-3749: nodejs-axios: Regular expression denial of service in trim function
- CVE-2021-41099: redis: Integer overflow issue with strings
Bug fixes:
- RFE ACM Application management UI doesn't reflect object status (Bugzilla #1965321)
- RHACM 2.4 files (Bugzilla #1983663)
- Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4 (Bugzilla #1993366)
- submariner-addon pod failing in RHACM 2.4 latest ds snapshot (Bugzilla #1994668)
- ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb (Bugzilla #2000274)
- pre-network-manager-config failed due to timeout when static config is used (Bugzilla #2003915)
- InfraEnv condition does not reflect the actual error message (Bugzilla #2009204, #2010030)
- Flaky test point to a nil pointer conditions list (Bugzilla #2010175)
- InfraEnv status shows 'Failed to create image: internal error (Bugzilla #2010272)
- subctl diagnose firewall intra-cluster - failed VXLAN checks (Bugzilla #2013157)
- pre-network-manager-config failed due to timeout when static config is used (Bugzilla #2014084)
Bugs fixed (https://bugzilla.redhat.com/):
1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
1965321 - RFE ACM Application management UI doesn't reflect object status
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1983663 - RHACM 2.4.0 images
1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite
1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite
1993366 - Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4
1994668 - submariner-addon pod failing in RHACM 2.4 latest ds snapshot
1995623 - CVE-2021-3711 openssl: SM2 Decryption Buffer Overflow
1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2000274 - ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb
2003915 - pre-network-manager-config failed due to timeout when static config is used
2009204 - InfraEnv condition does not reflect the actual error message
2010030 - InfraEnv condition does not reflect the actual error message
2010175 - Flaky test point to a nil pointer conditions list
2010272 - InfraEnv status shows 'Failed to create image: internal error
2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets
2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request
2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser
2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure
2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams
2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack
2011020 - CVE-2021-41099 redis: Integer overflow issue with strings
2013157 - subctl diagnose firewall intra-cluster - failed VXLAN checks
2014084 - pre-network-manager-config failed due to timeout when static config is used
- Bugs fixed (https://bugzilla.redhat.com/):
1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
2016256 - Release of OpenShift Serverless Eventing 1.19.0
2016258 - Release of OpenShift Serverless Serving 1.19.0
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202109-1789",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "communications cloud native core network slice selection function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.8.0"
},
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.3"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.59"
},
{
"model": "sinec infrastructure network services",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0.1.1"
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.2"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "curl",
"scope": "gte",
"trust": 1.0,
"vendor": "haxx",
"version": "7.20.0"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.11.0"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.57"
},
{
"model": "communications cloud native core network function cloud native environment",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.10.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.79.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.0"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.26"
},
{
"model": "communications cloud native core console",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.3"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "commerce guided search",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.3.2"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"model": "communications cloud native core service communication proxy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.0"
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "communications cloud native core security edge protection proxy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.35"
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.1"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:haxx:curl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.79.0",
"versionStartIncluding": "7.20.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:solidfire_baseboard_management_controller_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:solidfire_baseboard_management_controller:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.26",
"versionStartIncluding": "8.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "5.7.35",
"versionStartIncluding": "5.7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_slice_selection_function:1.8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:1.15.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_function_cloud_native_environment:1.10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_service_communication_proxy:1.15.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:1.15.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:1.11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_infrastructure_network_services:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.0.1.1",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.3",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:commerce_guided_search:11.3.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.1.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_security_edge_protection_proxy:22.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_console:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:22.1.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:9.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.0.6",
"versionStartIncluding": "9.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.2.12",
"versionStartIncluding": "8.2.0",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "164948"
},
{
"db": "PACKETSTORM",
"id": "165053"
}
],
"trust": 0.7
},
"cve": "CVE-2021-22947",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "VHN-381421",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/AU:N/C:N/I:P/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 5.9,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 2.2,
"impactScore": 3.6,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-22947",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-381421",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "When curl \u003e= 7.20.0 and \u003c= 7.78.0 connects to an IMAP or POP3 server to retrieve data using STARTTLS to upgrade to TLS security, the server can respond and send back multiple responses at once that curl caches. curl would then upgrade to TLS but not flush the in-queue of cached responses but instead continue using and trusting the responses it got *before* the TLS handshake as if they were authenticated. Using this flaw, it allows a Man-In-The-Middle attacker to first inject the fake responses, then pass-through the TLS traffic from the legitimate server and trick curl into sending data back to the user thinking the attacker\u0027s injected data comes from the TLS-protected server. A STARTTLS protocol injection flaw via man-in-the-middle was found in curl prior to 7.79.0. Such multiple \"pipelined\" responses are cached by curl. \nOver POP3 and IMAP an attacker can inject fake response data. Description:\n\nRed Hat 3scale API Management delivers centralized API management features\nthrough a distributed, cloud-hosted layer. It includes built-in features to\nhelp in building a more successful API program, including access control,\nrate limits, payment gateway integration, and developer experience tools. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.11/html-single/installing_3scale/index\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1912487 - CVE-2020-26247 rubygem-nokogiri: XML external entity injection via Nokogiri::XML::Schema\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nTHREESCALE-6868 - [3scale][2.11][LO-prio] Improve select default Application plan\nTHREESCALE-6879 - [3scale][2.11][HI-prio] Add \u0027Create new Application\u0027 flow to Product \u003e Applications index\nTHREESCALE-7030 - Address scalability in \u0027Create new Application\u0027 form\nTHREESCALE-7203 - Fix Zync resync command in 5.6.9. Creating equivalent Zync routes\nTHREESCALE-7475 - Some api calls result in \"Destroying user session\"\nTHREESCALE-7488 - Ability to add external Lua dependencies for custom policies\nTHREESCALE-7573 - Enable proxy environment variables via the APICAST CRD\nTHREESCALE-7605 - type change of \"policies_config\" in /admin/api/services/{service_id}/proxy.json\nTHREESCALE-7633 - Signup form in developer portal is disabled for users authenticted via external SSO\nTHREESCALE-7644 - Metrics: Service for 3scale operator is missing\nTHREESCALE-7646 - Cleanup/refactor Products and Backends index logic\nTHREESCALE-7648 - Remove \"#context-menu\" from the url\nTHREESCALE-7704 - Images based on RHEL 7 should contain at least ca-certificates-2021.2.50-72.el7_9.noarch.rpm\nTHREESCALE-7731 - Reenable operator metrics service for apicast-operator\nTHREESCALE-7761 - 3scale Operator doesn\u0027t respect *_proxy env vars\nTHREESCALE-7765 - Remove MessageBus from System\nTHREESCALE-7834 - admin can\u0027t create application when developer is not allowed to pick a plan\nTHREESCALE-7863 - Update some Obsolete API\u0027s in 3scale_v2.js\nTHREESCALE-7884 - Service top application endpoint is not working properly\nTHREESCALE-7912 - ServiceMonitor created by monitoring showing HTTP 400 error\nTHREESCALE-7913 - ServiceMonitor for 3scale operator has wide selector\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502
                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050
                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903
                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807
                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902
                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952
                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434
                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503
                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618
                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362
                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326
                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541
                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305
                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947
                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666
                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928
                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938
                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226
                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716
                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673
                   CVE-2022-24407
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with
updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing
Kubernetes application platform solution designed for on-premise or private
cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container
Platform 4.10.3. See the following advisory for the RPM packages for this
release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory.
See the following Release Notes documentation, which will be updated
shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
* grafana: Snapshot authentication bypass (CVE-2021-39226)
* golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
* nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
* grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
* grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata
as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is
sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is
sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is
sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these
updated packages and images when they are available in the appropriate
release channel. To check for available updates, use the OpenShift Console
or the CLI oc command.
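Because the advisory pins each release image by digest rather than by tag, a mirror or deployment manifest can be checked against the published values. Below is a minimal sketch of that check; the helper name `pinned_digest` and the example pull specs are illustrative, not part of the advisory, while the digest value is the x86_64 digest quoted above.

```python
from typing import Optional

# x86_64 release digest, taken verbatim from the advisory text above.
EXPECTED_X86_64 = "sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"

def pinned_digest(pullspec: str) -> Optional[str]:
    """Return the 'sha256:...' digest portion of an image pull spec,
    or None when the spec is tag-based (and therefore mutable)."""
    _repo, sep, digest = pullspec.rpartition("@")
    return digest if sep else None

# A digest-pinned pull spec can be compared against the advisory;
# a tag-based spec cannot be verified this way.
pinned = "quay.io/openshift-release-dev/ocp-release@" + EXPECTED_X86_64
tagged = "quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64"
assert pinned_digest(pinned) == EXPECTED_X86_64
assert pinned_digest(tagged) is None
```

In practice the digest to compare against comes from `oc adm release info` as shown above; the sketch only demonstrates the string comparison step.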
Instructions for upgrading a cluster are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation,
which will be updated shortly for this release, for important instructions
on how to upgrade your cluster and fully apply this asynchronous errata
update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and its storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state, alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - 
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn\u0027t use the 0th address\n2006308 - Backing Store YAML tab on click displays a blank screen on UI\n2006325 - Multicast is broken across nodes\n2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators\n2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource\n2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2006690 - OS boot failure \"x64 Exception Type 06 - Invalid Opcode Exception\"\n2006714 - add retry for etcd errors in kube-apiserver\n2006767 - KubePodCrashLooping may not fire\n2006803 - Set CoreDNS cache entries for forwarded zones\n2006861 - Add Sprint 207 part 2 translations\n2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap\n2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors\n2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded\n2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick\n2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails\n2007271 - CI Integration for Knative test cases\n2007289 - kubevirt tests are failing in CI\n2007322 - Devfile/Dockerfile import does not work for unsupported git host\n2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 
\n2007379 - Events are not generated for master offset for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all 
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer: ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10] sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs : Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2017427 - NTO does not restart TuneD daemon when profile application is taking too long\n2017535 - Broken Argo CD link image on GitOps Details Page\n2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references\n2017564 - On-prem prepender dispatcher script overwrites DNS search settings\n2017565 - CCMO does not handle additionalTrustBundle on Azure Stack\n2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice\n2017606 - [e2e][automation] add test to verify send key for VNC console\n2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes\n2017656 - VM IP address is \"undefined\" under VM details -\u003e ssh field\n2017663 - SSH password authentication is disabled when public key is not supplied\n2017680 - [gcp] Couldn\u2019t enable support for instances with GPUs on GCP\n2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set\n2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource\n2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults\n2017761 - [e2e][automation] dummy bug for 4.9 test dependency\n2017872 - Add Sprint 209 translations\n2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances\n2017879 - Add Chinese translation for \"alternate\"\n2017882 - multus: add handling of pod UIDs passed from runtime\n2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods\n2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI\n2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS\n2018094 - the tooltip length is limited\n2018152 - CNI pod is not 
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn't supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights Advisor 
widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. ==========================================================================\nUbuntu Security Notice USN-5079-3\nSeptember 21, 2021\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 18.04 LTS\n\nSummary:\n\nUSN-5079-1 introduced a regression in curl. 
\n\nSoftware Description:\n- curl: HTTP, HTTPS, and FTP client and client libraries\n\nDetails:\n\nUSN-5079-1 fixed vulnerabilities in curl. One of the fixes introduced a\nregression on Ubuntu 18.04 LTS. This update fixes the problem. \n\nWe apologize for the inconvenience. \n\nOriginal advisory details:\n\n It was discovered that curl incorrect handled memory when sending data to\n an MQTT server. A remote attacker could use this issue to cause curl to\n crash, resulting in a denial of service, or possibly execute arbitrary\n code. (CVE-2021-22945)\n Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. (CVE-2021-22946)\n Patrick Monnerat discovered that curl incorrectly handled responses\n received before STARTTLS. A remote attacker could possibly use this issue\n to inject responses and intercept communications. (CVE-2021-22947)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 18.04 LTS:\n curl 7.58.0-2ubuntu3.16\n libcurl3-gnutls 7.58.0-2ubuntu3.16\n libcurl3-nss 7.58.0-2ubuntu3.16\n libcurl4 7.58.0-2ubuntu3.16\n\nIn general, a standard system update will make all the necessary changes. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5. Summary:\n\nAn update is now available for OpenShift Logging 5.1.4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1858 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1917 - [release-5.1] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\n\n6. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes: \n\n* CVE-2021-33623: nodejs-trim-newlines: ReDoS in .end() method\n\n* CVE-2021-32626: redis: Lua scripts can overflow the heap-based Lua stack\n\n* CVE-2021-32627: redis: Integer overflow issue with Streams\n\n* CVE-2021-32628: redis: Integer overflow bug in the ziplist data structure\n\n* CVE-2021-32672: redis: Out of bounds read in lua debugger protocol parser\n\n* CVE-2021-32675: redis: Denial of service via Redis Standard Protocol\n(RESP) request\n\n* CVE-2021-32687: redis: Integer overflow issue with intsets\n\n* CVE-2021-32690: helm: information disclosure vulnerability\n\n* CVE-2021-32803: nodejs-tar: Insufficient symlink protection allowing\narbitrary file creation and overwrite\n\n* CVE-2021-32804: nodejs-tar: Insufficient absolute path sanitization\nallowing arbitrary file creation and overwrite\n\n* CVE-2021-23017: nginx: Off-by-one in ngx_resolver_copy() when labels are\nfollowed by a pointer to a root domain name\n\n* CVE-2021-3711: openssl: SM2 Decryption Buffer Overflow\n\n* CVE-2021-3712: openssl: Read buffer overruns processing ASN.1 strings\n\n* CVE-2021-3749: nodejs-axios: Regular expression denial of service in trim\nfunction\n\n* CVE-2021-41099: redis: Integer overflow issue with strings\n\nBug fixes:\n\n* RFE ACM Application management UI doesn\u0027t reflect object status (Bugzilla\n#1965321)\n\n* RHACM 2.4 files (Bugzilla #1983663)\n\n* Hive Operator CrashLoopBackOff when deploying ACM with latest downstream\n2.4 (Bugzilla #1993366)\n\n* submariner-addon pod failing in RHACM 2.4 latest ds snapshot (Bugzilla\n#1994668)\n\n* ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to\nmulticluster pod in clb (Bugzilla #2000274)\n\n* pre-network-manager-config 
failed due to timeout when static config is\nused (Bugzilla #2003915)\n\n* InfraEnv condition does not reflect the actual error message (Bugzilla\n#2009204, 2010030)\n\n* Flaky test point to a nil pointer conditions list (Bugzilla #2010175)\n\n* InfraEnv status shows \u0027Failed to create image: internal error (Bugzilla\n#2010272)\n\n* subctl diagnose firewall intra-cluster - failed VXLAN checks (Bugzilla\n#2013157)\n\n* pre-network-manager-config failed due to timeout when static config is\nused (Bugzilla #2014084)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name\n1965321 - RFE ACM Application management UI doesn\u0027t reflect object status\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1983663 - RHACM 2.4.0 images\n1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite\n1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite\n1993366 - Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4\n1994668 - submariner-addon pod failing in RHACM 2.4 latest ds snapshot\n1995623 - CVE-2021-3711 openssl: SM2 Decryption Buffer Overflow\n1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2000274 - ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb\n2003915 - pre-network-manager-config failed due to timeout when static config is used\n2009204 - InfraEnv condition does not reflect the actual error message\n2010030 - InfraEnv condition does not reflect the actual error message\n2010175 - Flaky test point to a nil pointer conditions list\n2010272 - InfraEnv 
status shows \u0027Failed to create image: internal error\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n2013157 - subctl diagnose firewall intra-cluster - failed VXLAN checks\n2014084 - pre-network-manager-config failed due to timeout when static config is used\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n2016256 - Release of OpenShift Serverless Eventing 1.19.0\n2016258 - Release of OpenShift Serverless Serving 1.19.0\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22947"
},
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "VULMON",
"id": "CVE-2021-22947"
},
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "164948"
},
{
"db": "PACKETSTORM",
"id": "165053"
}
],
"trust": 1.8
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-22947",
"trust": 2.0
},
{
"db": "SIEMENS",
"id": "SSA-389290",
"trust": 1.1
},
{
"db": "HACKERONE",
"id": "1334763",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "165053",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165337",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "164993",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "164948",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165135",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165209",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164740",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166319",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166112",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-381421",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-22947",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165631",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166279",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164220",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "VULMON",
"id": "CVE-2021-22947"
},
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "164948"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"id": "VAR-202109-1789",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T21:48:32.869000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-22947 log"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-22947"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-345",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.1,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20211029-0003/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213183"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2022/dsa-5197"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/mar/29"
},
{
"trust": 1.1,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.1,
"url": "https://hackerone.com/reports/1334763"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2021/09/msg00022.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/08/msg00017.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33938"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33929"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33928"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3733"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33930"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3656"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36385"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-0512"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.2,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3749"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
},
{
"trust": 0.1,
"url": "http://seclists.org/oss-sec/2021/q3/168"
},
{
"trust": 0.1,
"url": "https://security.archlinux.org/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.11/html-single/installing_3scale/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26247"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26247"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3575"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30758"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30689"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30682"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-18032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1801"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1788"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30744"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30797"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1799"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21779"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20271"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27828"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1871"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30734"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26926"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3272"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0202"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27824"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13050"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9802"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30762"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9895"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8625"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3899"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8819"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3867"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9893"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8808"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3902"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25215"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3900"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30761"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3537"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9805"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8820"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8769"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3449"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9850"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27781"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8811"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0055"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9803"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9862"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27618"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2014-3577"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3885"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15503"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3326"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10018"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25660"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8835"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-14502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8764"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8844"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3865"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1730"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3864"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3520"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15358"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21684"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14391"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3862"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0056"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3901"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39226"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3518"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-1000858"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3895"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-11793"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20454"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0532"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9894"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8816"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13627"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8771"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3897"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8814"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14889"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8743"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8815"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8783"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-9169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29362"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3516"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29361"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9952"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3517"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20305"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29363"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3868"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8846"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3894"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25677"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30666"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5079-3"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.16"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5079-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/bugs/1944120"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3757"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4848"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3620"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4628"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32803"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32626"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4618"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32675"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32804"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33623"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41099"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32804"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32627"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32672"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3711"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33623"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23017"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32687"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32803"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36221"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "VULMON",
"id": "CVE-2021-22947"
},
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "164948"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "VULMON",
"id": "CVE-2021-22947"
},
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "164948"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-09-29T00:00:00",
"db": "VULHUB",
"id": "VHN-381421"
},
{
"date": "2021-12-17T14:04:30",
"db": "PACKETSTORM",
"id": "165337"
},
{
"date": "2022-01-20T17:48:29",
"db": "PACKETSTORM",
"id": "165631"
},
{
"date": "2022-03-11T16:38:38",
"db": "PACKETSTORM",
"id": "166279"
},
{
"date": "2021-09-21T15:39:10",
"db": "PACKETSTORM",
"id": "164220"
},
{
"date": "2021-11-30T14:44:48",
"db": "PACKETSTORM",
"id": "165099"
},
{
"date": "2021-11-17T15:07:42",
"db": "PACKETSTORM",
"id": "164993"
},
{
"date": "2021-11-12T17:01:04",
"db": "PACKETSTORM",
"id": "164948"
},
{
"date": "2021-11-23T17:10:05",
"db": "PACKETSTORM",
"id": "165053"
},
{
"date": "2021-09-29T20:15:08.253000",
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-01-05T00:00:00",
"db": "VULHUB",
"id": "VHN-381421"
},
{
"date": "2024-03-27T15:03:30.377000",
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "164220"
}
],
"trust": 0.1
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat Security Advisory 2021-5191-02",
"sources": [
{
"db": "PACKETSTORM",
"id": "165337"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
}
],
"trust": 0.2
}
}
VAR-202203-0043
Vulnerability from variot - Updated: 2024-07-23 21:45
A flaw was found in the way the "flags" member of the new pipe buffer structure was lacking proper initialization in the copy_page_to_iter_pipe and push_pipe functions in the Linux kernel and could thus contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read-only files and thereby escalate their privileges on the system. The Linux kernel has an initialization vulnerability. Information may be disclosed or tampered with, and service operation may be interrupted, resulting in a possible denial-of-service (DoS) condition. Summary:
The Migration Toolkit for Containers (MTC) 1.5.4 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.8 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security updates:
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- nodejs-shelljs: improper privilege management (CVE-2022-0144)
- follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
Bug fix:
- RHACM 2.3.8 images (Bugzilla #2062316)
Bugs fixed (https://bugzilla.redhat.com/):
2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management 2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2062316 - RHACM 2.3.8 images
-
8.1) - aarch64, noarch, ppc64le, s390x, x86_64
-
Description:
The kernel packages contain the Linux kernel, the core of any Linux operating system. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: kernel-rt security and bug fix update Advisory ID: RHSA-2022:0819-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2022:0819 Issue date: 2022-03-10 CVE Names: CVE-2021-0920 CVE-2021-4154 CVE-2022-0330 CVE-2022-0435 CVE-2022-0492 CVE-2022-0847 CVE-2022-22942 =====================================================================
- Summary:
An update for kernel-rt is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64 Red Hat Enterprise Linux for Real Time (v. 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es):
- kernel: improper initialization of the "flags" member of the new pipe_buffer (CVE-2022-0847)
- kernel: Use After Free in unix_gc() which could result in a local privilege escalation (CVE-2021-0920)
- kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout (CVE-2021-4154)
- kernel: possible privileges escalation due to missing TLB flush (CVE-2022-0330)
- kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS (CVE-2022-0435)
- kernel: cgroups v1 release_agent feature may allow privilege escalation (CVE-2022-0492)
- kernel: failing usercopy allows for use-after-free exploitation (CVE-2022-22942)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
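The affected-version check for the pipe_buffer flaw (CVE-2022-0847) can be sketched as a tuple comparison. The lower bound (kernel >= 5.8) comes from the affected-products data in this record; the fixed stable releases used below (5.16.11, 5.15.25, 5.10.102) are an assumption of this sketch, taken from the upstream stable series, not from this record:

```python
def parse(v):
    """Turn a version string like '5.10.102' into a comparable int tuple."""
    return tuple(int(p) for p in v.split("."))

# Assumed upstream stable releases that shipped the fix, newest first.
FIXED = [(5, 16, 11), (5, 15, 25), (5, 10, 102)]
AFFECTED_SINCE = (5, 8)  # lower bound from this record's affected-products data

def is_affected(version):
    v = parse(version)
    if v < AFFECTED_SINCE:
        return False  # predates the introduction of the bug
    for fix in FIXED:
        # Patched if on or past the fix within the same stable series.
        if v[:2] == fix[:2] and v >= fix:
            return False
    # Anything at or beyond the newest fixed release is also patched.
    if v >= FIXED[0]:
        return False
    return True

print(is_affected("5.10.101"))  # older stable build: affected
print(is_affected("5.10.102"))  # fixed stable build: patched
```

Distribution kernels such as the kernel-rt builds listed below backport the fix, so for RHEL packages the advisory's package list, not the upstream version number, is authoritative.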
Bug Fix(es):
- kernel symbol '__rt_mutex_init' is exported GPL-only in kernel 4.18.0-348.2.1.rt7.132.el8_5 (BZ#2038423)
- kernel-rt: update RT source tree to the RHEL-8.5.z3 source tree (BZ#2045589)
-
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
2031930 - CVE-2021-0920 kernel: Use After Free in unix_gc() which could result in a local privilege escalation 2034514 - CVE-2021-4154 kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout 2042404 - CVE-2022-0330 kernel: possible privileges escalation due to missing TLB flush 2044809 - CVE-2022-22942 kernel: failing usercopy allows for use-after-free exploitation 2048738 - CVE-2022-0435 kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS 2051505 - CVE-2022-0492 kernel: cgroups v1 release_agent feature may allow privilege escalation 2060795 - CVE-2022-0847 kernel: improper initialization of the "flags" member of the new pipe_buffer
- Package List:
Red Hat Enterprise Linux Real Time for NFV (v. 8):
Source: kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm
x86_64: kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
Red Hat Enterprise Linux for Real Time (v. 8):
Source: kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm
x86_64: kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm kernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
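Whether an installed kernel-rt build already includes these fixes comes down to the RPM version comparison the package manager performs against the release strings above. A minimal sketch of that comparison, approximating rpmvercmp by splitting releases into alternating numeric and alphabetic segments (the real algorithm also handles tildes and carets, which these release strings do not use):

```python
import re

def rpm_vercmp(a, b):
    """Simplified rpmvercmp: compare alternating numeric/alpha segments.
    Returns -1, 0, or 1. An approximation, not the full RPM algorithm."""
    sa = re.findall(r"\d+|[a-zA-Z]+", a)
    sb = re.findall(r"\d+|[a-zA-Z]+", b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            # Numeric segments sort newer than alphabetic ones.
            return 1 if x.isdigit() else -1
        if x != y:
            return 1 if x > y else -1
    # All shared segments equal: the longer string sorts newer.
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

# Fixed build from the package list above.
FIXED = "4.18.0-348.20.1.rt7.150.el8_5"

def has_fix(installed_release):
    return rpm_vercmp(installed_release, FIXED) >= 0

print(has_fix("4.18.0-348.2.1.rt7.132.el8_5"))   # older build from BZ#2038423
print(has_fix("4.18.0-348.20.1.rt7.150.el8_5"))  # the fixed build itself
```

In practice `rpm -q kernel-rt` (or the package manager itself) performs this comparison; the sketch only illustrates why 348.2.1 sorts before 348.20.1.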
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-0920 https://access.redhat.com/security/cve/CVE-2021-4154 https://access.redhat.com/security/cve/CVE-2022-0330 https://access.redhat.com/security/cve/CVE-2022-0435 https://access.redhat.com/security/cve/CVE-2022-0492 https://access.redhat.com/security/cve/CVE-2022-0847 https://access.redhat.com/security/cve/CVE-2022-22942 https://access.redhat.com/security/updates/classification/#important https://access.redhat.com/security/vulnerabilities/RHSB-2022-002
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
Show details on source website{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202203-0043",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "codeready linux builder",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.8"
},
{
"model": "enterprise linux server for power little endian update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.1"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "enterprise linux for real time for nfv",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.16.11"
},
{
"model": "ovirt-engine",
"scope": "eq",
"trust": 1.0,
"vendor": "ovirt",
"version": "4.4.10.2"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux for ibm z systems eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux server update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.1"
},
{
"model": "enterprise linux for ibm z systems",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"model": "sma1000",
"scope": "lte",
"trust": 1.0,
"vendor": "sonicwall",
"version": "12.4.2-02044"
},
{
"model": "enterprise linux for power little endian eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux server for power little endian update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "scalance lpe9403",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "2.0"
},
{
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux server update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux for power little endian",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.15"
},
{
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux for ibm z systems eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.15.25"
},
{
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "enterprise linux for power little endian eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "enterprise linux server for power little endian update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.16"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.10.102"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"model": "virtualization host",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"model": "enterprise linux server update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": null,
"trust": 0.8,
"vendor": "fedora",
"version": null
},
{
"model": "sma1000",
"scope": null,
"trust": 0.8,
"vendor": "sonicwall",
"version": null
},
{
"model": "red hat enterprise linux eus",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"model": "h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "ovirt-engine",
"scope": null,
"trust": 0.8,
"vendor": "ovirt",
"version": null
},
{
"model": "red hat enterprise linux for ibm z systems - extended update support",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"model": "red hat enterprise linux for ibm z systems",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"model": "red hat enterprise linux",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"model": "scalance lpe9403",
"scope": null,
"trust": 0.8,
"vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.16.11",
"versionStartIncluding": "5.16",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.15.25",
"versionStartIncluding": "5.15",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.10.102",
"versionStartIncluding": "5.8",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time:8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv_tus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv_tus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_tus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_tus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv:8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_update_services_for_sap_solutions:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_update_services_for_sap_solutions:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_update_services_for_sap_solutions:8.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_ibm_z_systems_eus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_ibm_z_systems_eus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_ibm_z_systems:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_for_power_little_endian_update_services_for_sap_solutions:8.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_for_power_little_endian_update_services_for_sap_solutions:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_for_power_little_endian_update_services_for_sap_solutions:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:redhat:codeready_linux_builder:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:redhat:virtualization_host:4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:ovirt:ovirt-engine:4.4.10.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_lpe9403_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "2.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_lpe9403:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:sonicwall:sma1000_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "12.4.2-02044",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:sonicwall:sma1000:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "166789"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166280"
},
{
"db": "PACKETSTORM",
"id": "166282"
},
{
"db": "PACKETSTORM",
"id": "166281"
},
{
"db": "PACKETSTORM",
"id": "166265"
},
{
"db": "PACKETSTORM",
"id": "166264"
}
],
"trust": 0.7
},
"cve": "CVE-2022-0847",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "HIGH",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Local",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Complete",
"baseScore": 7.2,
"confidentialityImpact": "Complete",
"exploitabilityScore": null,
"id": "CVE-2022-0847",
"impactScore": null,
"integrityImpact": "Complete",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "High",
"trust": 0.9,
"userInteractionRequired": null,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.8,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2022-0847",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-0847",
"trust": 1.8,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202203-522",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULMON",
"id": "CVE-2022-0847",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "A flaw was found in the way the \"flags\" member of the new pipe buffer structure was lacking proper initialization in copy_page_to_iter_pipe and push_pipe functions in the Linux kernel and could thus contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read only files and as such escalate their privileges on the system. Linux Kernel Has an initialization vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n\n5. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.8 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity updates:\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* nodejs-shelljs: improper privilege management (CVE-2022-0144)\n\n* follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\nBug fix:\n\n* RHACM 2.3.8 images (Bugzilla #2062316)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2062316 - RHACM 2.3.8 images\n\n5. 8.1) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe kernel packages contain the Linux kernel, the core of any Linux\noperating system. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel-rt security and bug fix update\nAdvisory ID: RHSA-2022:0819-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0819\nIssue date: 2022-03-10\nCVE Names: CVE-2021-0920 CVE-2021-4154 CVE-2022-0330 \n CVE-2022-0435 CVE-2022-0492 CVE-2022-0847 \n CVE-2022-22942 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel-rt is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64\nRed Hat Enterprise Linux for Real Time (v. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. 
\n\nSecurity Fix(es):\n\n* kernel: improper initialization of the \"flags\" member of the new\npipe_buffer (CVE-2022-0847)\n\n* kernel: Use After Free in unix_gc() which could result in a local\nprivilege escalation (CVE-2021-0920)\n\n* kernel: local privilege escalation by exploiting the fsconfig syscall\nparameter leads to container breakout (CVE-2021-4154)\n\n* kernel: possible privileges escalation due to missing TLB flush\n(CVE-2022-0330)\n\n* kernel: remote stack overflow via kernel panic on systems using TIPC may\nlead to DoS (CVE-2022-0435)\n\n* kernel: cgroups v1 release_agent feature may allow privilege escalation\n(CVE-2022-0492)\n\n* kernel: failing usercopy allows for use-after-free exploitation\n(CVE-2022-22942)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* kernel symbol \u0027__rt_mutex_init\u0027 is exported GPL-only in kernel\n4.18.0-348.2.1.rt7.132.el8_5 (BZ#2038423)\n\n* kernel-rt: update RT source tree to the RHEL-8.5.z3 source tree\n(BZ#2045589)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2031930 - CVE-2021-0920 kernel: Use After Free in unix_gc() which could result in a local privilege escalation\n2034514 - CVE-2021-4154 kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout\n2042404 - CVE-2022-0330 kernel: possible privileges escalation due to missing TLB flush\n2044809 - CVE-2022-22942 kernel: failing usercopy allows for use-after-free exploitation\n2048738 - CVE-2022-0435 kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS\n2051505 - CVE-2022-0492 kernel: cgroups v1 release_agent feature may allow privilege escalation\n2060795 - CVE-2022-0847 kernel: improper initialization of the \"flags\" member of the new pipe_buffer\n\n6. Package List:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 8):\n\nSource:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\n\nRed Hat Enterprise Linux for Real Time (v. 
8):\n\nSource:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-0920\nhttps://access.redhat.com/security/cve/CVE-2021-4154\nhttps://access.redhat.com/security/cve/CVE-2022-0330\nhttps://access.redhat.com/security/cve/CVE-2022-0435\nhttps://access.redhat.com/security/cve/CVE-2022-0492\nhttps://access.redhat.com/security/cve/CVE-2022-0847\nhttps://access.redhat.com/security/cve/CVE-2022-22942\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/security/vulnerabilities/RHSB-2022-002\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYippFNzjgjWX9erEAQhDwRAAjsGfW6qXFI81H8xov/wQnw/PdsUOhzDl\nISzJEeXALEQCloLH+UDcgo/wV1es00USfBo1H/SpDc5ahjBWP2pbo8QtIRKT6h/k\nord4KsAMGjqWRI+zaGbaFoL0q4okMG9H6r731TnhX06CaLXLui8iUJrQLziHo02t\n/AihF9dW30/w4tXyKeMc73D1lKHImQQFfJo5xpIo8Mm7+6GFrkne8Z46SKXjjyfG\nIODAcU3wA0C93bbtR4EHEbenVyVVaE5Phn40vxxF00+AQTHoc5nYpOJbDLI3bi1F\nGbEKQ5pf0jkScwlfEHtHkmjPk92PA/wV41BhPoJw8oKshH4RRxml4Ps0KldI4NrQ\nypmDLZ3CfJ+saFbNLN5BARCiqJavF5A4yszHZ5QuopmC1RJx6/rAuE79KkeB0JvW\nIOaXPzzc05dCqdyVBvNAu+XpVlTbe+XGBR0LalYYjYWxQSrEYAYQ005mcvEWOPRm\nQfPSM7eOaAzo9RGrMirTm0Gz9BJ0TbvNGiMmMTpLdb6akx1BQcQ5bpAjUCQN0O7j\nKIFri0FxflweqZswTchfdbW74VuUyTVaeFYKGhp5hFPV6lFkDUFEFC71ANvPaewE\nX1Z5Ae0gFMD8w5m5eePHqYuEaL6NHtYctHlBh0ef6mrvsKq9lmxJpdXrZUO+eP4w\nnEhPbkKSmMY=\n=CLN6\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-0847"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "PACKETSTORM",
"id": "166789"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166280"
},
{
"db": "PACKETSTORM",
"id": "166282"
},
{
"db": "PACKETSTORM",
"id": "166281"
},
{
"db": "PACKETSTORM",
"id": "166265"
},
{
"db": "PACKETSTORM",
"id": "166264"
}
],
"trust": 2.34
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-0847",
"trust": 4.0
},
{
"db": "PACKETSTORM",
"id": "166230",
"trust": 2.4
},
{
"db": "PACKETSTORM",
"id": "166258",
"trust": 2.4
},
{
"db": "PACKETSTORM",
"id": "166229",
"trust": 2.4
},
{
"db": "SIEMENS",
"id": "SSA-222547",
"trust": 1.6
},
{
"db": "ICS CERT",
"id": "ICSA-22-167-09",
"trust": 1.4
},
{
"db": "PACKETSTORM",
"id": "176534",
"trust": 1.0
},
{
"db": "JVN",
"id": "JVNVU99030761",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166516",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166280",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166305",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166812",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166241",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166569",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032843",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031421",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022030808",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022042576",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031308",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031036",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1027",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0965",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2981",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1677",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1405",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1064",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0944",
"trust": 0.6
},
{
"db": "CXSECURITY",
"id": "WLB-2022030042",
"trust": 0.6
},
{
"db": "CXSECURITY",
"id": "WLB-2022030060",
"trust": 0.6
},
{
"db": "EXPLOIT-DB",
"id": "50808",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522",
"trust": 0.6
},
{
"db": "VULMON",
"id": "CVE-2022-0847",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166789",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166282",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166281",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166265",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166264",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "PACKETSTORM",
"id": "166789"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166280"
},
{
"db": "PACKETSTORM",
"id": "166282"
},
{
"db": "PACKETSTORM",
"id": "166281"
},
{
"db": "PACKETSTORM",
"id": "166265"
},
{
"db": "PACKETSTORM",
"id": "166264"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"id": "VAR-202203-0043",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.21111111
},
"last_update_date": "2024-07-23T21:45:03.589000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Bug\u00a02060795",
"trust": 0.8,
"url": "https://fedoraproject.org/"
},
{
"title": "Linux kernel Security vulnerabilities",
"trust": 0.6,
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=184957"
},
{
"title": "Red Hat: Important: kernel-rt security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220822 - security advisory"
},
{
"title": "Red Hat: Important: kernel security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220831 - security advisory"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-0847"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2022-0847"
},
{
"title": "Dirty-Pipe-Oneshot",
"trust": 0.1,
"url": "https://github.com/badboy-sft/dirty-pipe-oneshot "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-665",
"trust": 1.0
},
{
"problemtype": "Improper initialization (CWE-665) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 3.0,
"url": "http://packetstormsecurity.com/files/166229/dirty-pipe-linux-privilege-escalation.html"
},
{
"trust": 3.0,
"url": "http://packetstormsecurity.com/files/166258/dirty-pipe-local-privilege-escalation.html"
},
{
"trust": 2.4,
"url": "http://packetstormsecurity.com/files/166230/dirty-pipe-suid-binary-hijack-privilege-escalation.html"
},
{
"trust": 1.6,
"url": "https://dirtypipe.cm4all.com/"
},
{
"trust": 1.6,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-222547.pdf"
},
{
"trust": 1.6,
"url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2022-0015"
},
{
"trust": 1.6,
"url": "https://www.suse.com/support/kb/doc/?id=000020603"
},
{
"trust": 1.6,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2060795"
},
{
"trust": 1.6,
"url": "https://security.netapp.com/advisory/ntap-20220325-0005/"
},
{
"trust": 1.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847"
},
{
"trust": 1.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0847"
},
{
"trust": 1.0,
"url": "http://packetstormsecurity.com/files/176534/linux-4.20-ktls-read-only-write.html"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99030761/index.html"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-22-167-09"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/issue/wlb-2022030060"
},
{
"trust": 0.6,
"url": "https://www.exploit-db.com/exploits/50808"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/issue/wlb-2022030042"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166305/red-hat-security-advisory-2022-0841-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031308"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166516/red-hat-security-advisory-2022-1083-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032843"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166241/ubuntu-security-notice-usn-5317-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1405"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031036"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166280/red-hat-security-advisory-2022-0822-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1027"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022030808"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1064"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-167-09"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022042576"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166569/ubuntu-security-notice-usn-5362-1.html"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-0847/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166812/red-hat-security-advisory-2022-1476-01.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-file-write-via-dirty-pipe-37724"
},
{
"trust": 0.6,
"url": "https://source.android.com/security/bulletin/2022-05-01"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0944"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2981"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0965"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031421"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1677"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-0492"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-22942"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-0330"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2022-002"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-0920"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4154"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0435"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22942"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25315"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25236"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25235"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23308"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22822"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22823"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22827"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0392"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0261"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22826"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3999"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0413"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23219"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22824"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-45960"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22825"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-46143"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0516"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0359"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0318"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4083"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25710"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-21684"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4122"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22817"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1396"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0532"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2014-3577"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22816"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21684"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0155"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22825"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1083"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23566"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22824"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0144"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0822"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0821"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4028"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0823"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0831"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0819"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "PACKETSTORM",
"id": "166789"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166280"
},
{
"db": "PACKETSTORM",
"id": "166282"
},
{
"db": "PACKETSTORM",
"id": "166281"
},
{
"db": "PACKETSTORM",
"id": "166265"
},
{
"db": "PACKETSTORM",
"id": "166264"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "PACKETSTORM",
"id": "166789"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166280"
},
{
"db": "PACKETSTORM",
"id": "166282"
},
{
"db": "PACKETSTORM",
"id": "166281"
},
{
"db": "PACKETSTORM",
"id": "166265"
},
{
"db": "PACKETSTORM",
"id": "166264"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-03-10T00:00:00",
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"date": "2023-07-12T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"date": "2022-04-20T15:12:33",
"db": "PACKETSTORM",
"id": "166789"
},
{
"date": "2022-03-29T15:53:19",
"db": "PACKETSTORM",
"id": "166516"
},
{
"date": "2022-03-11T16:38:56",
"db": "PACKETSTORM",
"id": "166280"
},
{
"date": "2022-03-11T16:39:27",
"db": "PACKETSTORM",
"id": "166282"
},
{
"date": "2022-03-11T16:39:13",
"db": "PACKETSTORM",
"id": "166281"
},
{
"date": "2022-03-11T16:31:15",
"db": "PACKETSTORM",
"id": "166265"
},
{
"date": "2022-03-11T16:31:02",
"db": "PACKETSTORM",
"id": "166264"
},
{
"date": "2022-03-07T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"date": "2022-03-10T17:44:57.283000",
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2024-01-12T00:00:00",
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"date": "2023-07-12T06:29:00",
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"date": "2022-08-11T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"date": "2024-07-02T17:05:01.307000",
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "local",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
}
],
"trust": 0.6
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Linux\u00a0Kernel\u00a0 Initialization vulnerability in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
}
],
"trust": 0.6
}
}
VAR-202208-2263
Vulnerability from variot - Updated: 2024-07-23 21:44
When curl is used to retrieve and parse cookies from an HTTP(S) server, it accepts cookies containing control codes that, when later sent back to an HTTP server, might make the server return 400 responses, effectively allowing a "sister site" to deny service to all siblings. curl from Haxx, and products from other vendors that bundle it, have unspecified vulnerabilities. Service operation may be interrupted (DoS). A security vulnerability exists in curl versions 4.9 through 7.84.
==========================================================================
Ubuntu Security Notice USN-5587-1
September 01, 2022
curl vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
curl could be denied access to HTTP(S) content if it received a specially crafted cookie.
Software Description:
- curl: HTTP, HTTPS, and FTP client and client libraries
Details:
Axel Chong discovered that when curl accepted and sent back cookies containing control bytes, an HTTP(S) server might return a 400 (Bad Request) response. A malicious cookie host could possibly use this to cause a denial of service.
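The mechanism behind this advisory can be sketched as follows. This is an illustrative check, not curl's actual code: RFC 6265 forbids control characters in cookie values, and affected curl versions stored and replayed such values, so a strict server would answer every subsequent request with 400.

```python
# Illustrative sketch (not curl source): RFC 6265 cookie values must not
# contain control characters (0x00-0x1F, 0x7F).  Affected curl versions
# accepted such values from Set-Cookie and replayed them, so a strict
# server could reject every later request with 400 Bad Request.

def contains_control_bytes(cookie_value: str) -> bool:
    """Return True if a strict server would reject this cookie value."""
    return any(ord(ch) < 0x20 or ord(ch) == 0x7F for ch in cookie_value)

# A "sister site" on the same domain could set a cookie like this one:
malicious = "session=abc\x07def"   # embedded BEL control byte
benign = "session=abcdef"

print(contains_control_bytes(malicious))  # True  -> later requests get 400
print(contains_control_bytes(benign))     # False -> requests succeed
```

The fix in later curl releases is effectively the inverse of this check: cookie values carrying control bytes are rejected at parse time instead of being stored and replayed.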
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS:
  curl 7.81.0-1ubuntu1.4
  libcurl3-gnutls 7.81.0-1ubuntu1.4
  libcurl3-nss 7.81.0-1ubuntu1.4
  libcurl4 7.81.0-1ubuntu1.4

Ubuntu 20.04 LTS:
  curl 7.68.0-1ubuntu2.13
  libcurl3-gnutls 7.68.0-1ubuntu2.13
  libcurl3-nss 7.68.0-1ubuntu2.13
  libcurl4 7.68.0-1ubuntu2.13

Ubuntu 18.04 LTS:
  curl 7.58.0-2ubuntu3.20
  libcurl3-gnutls 7.58.0-2ubuntu3.20
  libcurl3-nss 7.58.0-2ubuntu3.20
  libcurl4 7.58.0-2ubuntu3.20

Ubuntu 16.04 ESM:
  curl 7.47.0-1ubuntu2.19+esm5
  libcurl3 7.47.0-1ubuntu2.19+esm5
  libcurl3-gnutls 7.47.0-1ubuntu2.19+esm5
  libcurl3-nss 7.47.0-1ubuntu2.19+esm5

Ubuntu 14.04 ESM:
  curl 7.35.0-1ubuntu2.20+esm12
  libcurl3 7.35.0-1ubuntu2.20+esm12
  libcurl3-gnutls 7.35.0-1ubuntu2.20+esm12
  libcurl3-nss 7.35.0-1ubuntu2.20+esm12
In general, a standard system update will make all the necessary changes.

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.6.6 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/
Security Fix(es):

* CVE-2023-28856 redis: Insufficient validation of HINCRBYFLOAT command
* CVE-2023-32314 vm2: Sandbox Escape
* CVE-2023-32313 vm2: Inspect Manipulation
- Solution:
For Red Hat Advanced Cluster Management for Kubernetes, see the following documentation for details on how to install the images:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/install/installing#installing-while-connected-online
- Bugs fixed (https://bugzilla.redhat.com/):
2187525 - CVE-2023-28856 redis: Insufficient validation of HINCRBYFLOAT command
2208376 - CVE-2023-32314 vm2: Sandbox Escape
2208377 - CVE-2023-32313 vm2: Inspect Manipulation
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Low: curl security update
Advisory ID: RHSA-2023:2478-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2023:2478
Issue date: 2023-05-09
CVE Names: CVE-2022-35252 CVE-2022-43552
====================================================================

1. Summary:
An update for curl is now available for Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Low. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux BaseOS (v. 9) - aarch64, ppc64le, s390x, x86_64
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Security Fix(es):
- curl: Incorrect handling of control code characters in cookies (CVE-2022-35252)

- curl: Use-after-free triggered by an HTTP proxy deny response (CVE-2022-43552)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 9.2 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2120718 - CVE-2022-35252 curl: Incorrect handling of control code characters in cookies
2152652 - CVE-2022-43552 curl: Use-after-free triggered by an HTTP proxy deny response
- Package List:
Red Hat Enterprise Linux AppStream (v. 9):
aarch64: curl-debuginfo-7.76.1-23.el9.aarch64.rpm curl-debugsource-7.76.1-23.el9.aarch64.rpm curl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm libcurl-debuginfo-7.76.1-23.el9.aarch64.rpm libcurl-devel-7.76.1-23.el9.aarch64.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm
ppc64le: curl-debuginfo-7.76.1-23.el9.ppc64le.rpm curl-debugsource-7.76.1-23.el9.ppc64le.rpm curl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm libcurl-debuginfo-7.76.1-23.el9.ppc64le.rpm libcurl-devel-7.76.1-23.el9.ppc64le.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm
s390x: curl-debuginfo-7.76.1-23.el9.s390x.rpm curl-debugsource-7.76.1-23.el9.s390x.rpm curl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm libcurl-debuginfo-7.76.1-23.el9.s390x.rpm libcurl-devel-7.76.1-23.el9.s390x.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm
x86_64: curl-debuginfo-7.76.1-23.el9.i686.rpm curl-debuginfo-7.76.1-23.el9.x86_64.rpm curl-debugsource-7.76.1-23.el9.i686.rpm curl-debugsource-7.76.1-23.el9.x86_64.rpm curl-minimal-debuginfo-7.76.1-23.el9.i686.rpm curl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm libcurl-debuginfo-7.76.1-23.el9.i686.rpm libcurl-debuginfo-7.76.1-23.el9.x86_64.rpm libcurl-devel-7.76.1-23.el9.i686.rpm libcurl-devel-7.76.1-23.el9.x86_64.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm
Red Hat Enterprise Linux BaseOS (v. 9):
Source: curl-7.76.1-23.el9.src.rpm
aarch64: curl-7.76.1-23.el9.aarch64.rpm curl-debuginfo-7.76.1-23.el9.aarch64.rpm curl-debugsource-7.76.1-23.el9.aarch64.rpm curl-minimal-7.76.1-23.el9.aarch64.rpm curl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm libcurl-7.76.1-23.el9.aarch64.rpm libcurl-debuginfo-7.76.1-23.el9.aarch64.rpm libcurl-minimal-7.76.1-23.el9.aarch64.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm
ppc64le: curl-7.76.1-23.el9.ppc64le.rpm curl-debuginfo-7.76.1-23.el9.ppc64le.rpm curl-debugsource-7.76.1-23.el9.ppc64le.rpm curl-minimal-7.76.1-23.el9.ppc64le.rpm curl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm libcurl-7.76.1-23.el9.ppc64le.rpm libcurl-debuginfo-7.76.1-23.el9.ppc64le.rpm libcurl-minimal-7.76.1-23.el9.ppc64le.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm
s390x: curl-7.76.1-23.el9.s390x.rpm curl-debuginfo-7.76.1-23.el9.s390x.rpm curl-debugsource-7.76.1-23.el9.s390x.rpm curl-minimal-7.76.1-23.el9.s390x.rpm curl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm libcurl-7.76.1-23.el9.s390x.rpm libcurl-debuginfo-7.76.1-23.el9.s390x.rpm libcurl-minimal-7.76.1-23.el9.s390x.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm
x86_64: curl-7.76.1-23.el9.x86_64.rpm curl-debuginfo-7.76.1-23.el9.i686.rpm curl-debuginfo-7.76.1-23.el9.x86_64.rpm curl-debugsource-7.76.1-23.el9.i686.rpm curl-debugsource-7.76.1-23.el9.x86_64.rpm curl-minimal-7.76.1-23.el9.x86_64.rpm curl-minimal-debuginfo-7.76.1-23.el9.i686.rpm curl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm libcurl-7.76.1-23.el9.i686.rpm libcurl-7.76.1-23.el9.x86_64.rpm libcurl-debuginfo-7.76.1-23.el9.i686.rpm libcurl-debuginfo-7.76.1-23.el9.x86_64.rpm libcurl-minimal-7.76.1-23.el9.i686.rpm libcurl-minimal-7.76.1-23.el9.x86_64.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
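The verification step referenced here can be scripted. A minimal sketch, assuming `rpm` is on PATH and Red Hat's public key has already been imported with `rpm --import`; the package filename in the usage note is one of the packages listed above:

```python
import subprocess

def checksig_argv(rpm_path: str) -> list[str]:
    # Build the argv for rpm's signature check; kept separate so it
    # can be inspected without running anything.
    return ["rpm", "--checksig", rpm_path]

def verify_rpm(rpm_path: str) -> bool:
    # rpm exits 0 and prints "OK" markers when the digests and GPG
    # signature verify against the imported keys.
    result = subprocess.run(checksig_argv(rpm_path),
                            capture_output=True, text=True)
    return result.returncode == 0 and "OK" in result.stdout
```

For example, `verify_rpm("curl-7.76.1-23.el9.x86_64.rpm")` returns `True` only when the package carries a valid signature from an imported key.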
- References:
https://access.redhat.com/security/cve/CVE-2022-35252 https://access.redhat.com/security/cve/CVE-2022-43552 https://access.redhat.com/security/updates/classification/#low https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBZFo0V9zjgjWX9erEAQhmTw/9FUwLCGRKCmddNVTMAaay54EPggJFOPKx nN06YIqiK5arkX4SD58YZrX9J0gUZcwGs6s5WO35pG3F+qJXhe8E8fbzavqRG5NB oxG+pDC5+6xQxK41tkuLYJoUhF1w4yG8SuMSzroLcpbut/MAjKGGw4qgyNGit1Su xFGrDTyFxtj+tUZIQCil0HAqlXswQ7G2ukB9kQBpxNRfR0V2ANfmfkkGj8+xWauh L1PcaDezNWgAbgWbuf3mHNiwDMxWsNfcwCbx3P8sF+vRe7q5RdIFNL1oXJkPxQVy C6L29KcaLYxToNmUNyrOncWAj8KSlrDngVq3NXnG34lVzqz2t/ouc/0lX4Jc9qTL mGwYoXvlTqQgV4hGQPfDufApaukxgZfcSidSfqlNt1amYYNiYcvIyf15dht87ipB 27ahZWDKvunB4gqMG62XNHyiu9bKmDCyL57ggUBt3wxJ7H9M/OgjsI7C/i/10SMT D75GjYaU2TWyGLd4SvbV6/3pA3zAZ0Ffqc66uANwfBXC7jFd2/ykEBir3vJYTq17 r2YWYgH2sma5kwb7ZHQhLKk+N2a0g1KX+Mr0V2wJ+yAYwkbz6wu/BVDXstBFkumJ /iKmtOn0Mk07wo/3wvWu5M4tk4kZzmLzs1/ybH3GWOUbFUxbqgOos3/0Vi/uSW88 Yxf4bV/uBmU=HlZ2 -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . This product release includes bug fixes and security update for the following packages: windows-machine-config-operator and windows-machine-config-operator-bundle. Description:
Red Hat OpenShift support for Windows Containers allows you to deploy Windows container workloads running on Windows Server containers. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server 2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2174485 - CVE-2023-25173 containerd: Supplementary groups are not set up properly
- JIRA issues fixed (https://issues.redhat.com/):
OCPBUGS-10418 - Case sensitivity issue when label "openshift.io/cluster-monitoring" set to 'True' on openshift-windows-machine-config-operator namespace OCPBUGS-11831 - oc adm node-logs failing in vSphere CI OCPBUGS-15435 - Instance configurations fails on Windows Server 2019 without the container feature OCPBUGS-3572 - Check if Windows defender is running doesnt work OCPBUGS-4247 - Load balancer shows connectivity outage during Windows nodes upgrade OCPBUGS-5894 - Windows nodes do not get drained (deconfigure) during the upgrade process OCPBUGS-7726 - WMCO kubelet version not matching OCP payload's one OCPBUGS-8055 - containerd version is being misreported WINC-818 - Investigate if the Upgradeable condition is being tested in e2e suite WINC-823 - Test generated community manifests in WMCO e2e
- Description:
VolSync is a Kubernetes operator that enables asynchronous replication of persistent volumes within a cluster, or across clusters. After deploying the VolSync operator, it can create and maintain copies of your persistent data.
For more information about VolSync, see:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/add-ons-overview#volsync
or the VolSync open source community website at: https://volsync.readthedocs.io/en/stable/.
Security fix(es): * CVE-2023-3089 openshift: OCP & FIPS mode
- Bugs fixed (https://bugzilla.redhat.com/):
2212085 - CVE-2023-3089 openshift: OCP & FIPS mode
- This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience. After installing the updated packages, the httpd daemon will be restarted automatically. Bugs fixed (https://bugzilla.redhat.com/):
2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds 2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody 2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection 2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling 2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read 2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite() 2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match() 2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability 2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism 2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection 2099300 - CVE-2022-32206 curl: HTTP compression denial of service 2099305 - CVE-2022-32207 curl: Unpreserved file permissions 2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification 2120718 - CVE-2022-35252 curl: control code in cookie denial of service 2135411 - CVE-2022-32221 curl: POST following PUT confusion 2135413 - CVE-2022-42915 curl: HTTP proxy double-free 2135416 - CVE-2022-42916 curl: HSTS bypass via IDN
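Bug 2120718 above (CVE-2022-35252) concerns cookie values carrying control codes, which RFC 6265 excludes from its cookie-octet grammar. The following is an illustrative sketch of that kind of filtering, not curl's actual implementation; for simplicity it also rejects horizontal tab, which the real grammar handles separately:

```python
def cookie_value_is_clean(value: str) -> bool:
    # Reject C0 control bytes (0x00-0x1F) and DEL (0x7F): echoing such
    # octets back in a Cookie header can make a server answer 400,
    # the denial of service described in CVE-2022-35252.
    return not any(ord(ch) < 0x20 or ord(ch) == 0x7F for ch in value)
```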
-
Gentoo Linux Security Advisory GLSA 202212-01
https://security.gentoo.org/
Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01
Synopsis
Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.
Background
A command line tool and library for transferring data with URLs.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/curl < 7.86.0 >= 7.86.0
Description
Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All curl users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"
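The atom `>=net-misc/curl-7.86.0` encodes a simple version floor. A minimal sketch of the comparison it implies, assuming plain dotted-numeric version strings (no suffixes such as `_rc1`):

```python
def parse_version(v: str) -> tuple[int, ...]:
    # "7.86.0" -> (7, 86, 0); tuples compare component-wise.
    return tuple(int(part) for part in v.split("."))

def is_unaffected(installed: str, floor: str = "7.86.0") -> bool:
    # True when the installed curl meets the GLSA's unaffected floor.
    return parse_version(installed) >= parse_version(floor)
```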
References
[ 1 ] CVE-2021-22922 https://nvd.nist.gov/vuln/detail/CVE-2021-22922 [ 2 ] CVE-2021-22923 https://nvd.nist.gov/vuln/detail/CVE-2021-22923 [ 3 ] CVE-2021-22925 https://nvd.nist.gov/vuln/detail/CVE-2021-22925 [ 4 ] CVE-2021-22926 https://nvd.nist.gov/vuln/detail/CVE-2021-22926 [ 5 ] CVE-2021-22945 https://nvd.nist.gov/vuln/detail/CVE-2021-22945 [ 6 ] CVE-2021-22946 https://nvd.nist.gov/vuln/detail/CVE-2021-22946 [ 7 ] CVE-2021-22947 https://nvd.nist.gov/vuln/detail/CVE-2021-22947 [ 8 ] CVE-2022-22576 https://nvd.nist.gov/vuln/detail/CVE-2022-22576 [ 9 ] CVE-2022-27774 https://nvd.nist.gov/vuln/detail/CVE-2022-27774 [ 10 ] CVE-2022-27775 https://nvd.nist.gov/vuln/detail/CVE-2022-27775 [ 11 ] CVE-2022-27776 https://nvd.nist.gov/vuln/detail/CVE-2022-27776 [ 12 ] CVE-2022-27779 https://nvd.nist.gov/vuln/detail/CVE-2022-27779 [ 13 ] CVE-2022-27780 https://nvd.nist.gov/vuln/detail/CVE-2022-27780 [ 14 ] CVE-2022-27781 https://nvd.nist.gov/vuln/detail/CVE-2022-27781 [ 15 ] CVE-2022-27782 https://nvd.nist.gov/vuln/detail/CVE-2022-27782 [ 16 ] CVE-2022-30115 https://nvd.nist.gov/vuln/detail/CVE-2022-30115 [ 17 ] CVE-2022-32205 https://nvd.nist.gov/vuln/detail/CVE-2022-32205 [ 18 ] CVE-2022-32206 https://nvd.nist.gov/vuln/detail/CVE-2022-32206 [ 19 ] CVE-2022-32207 https://nvd.nist.gov/vuln/detail/CVE-2022-32207 [ 20 ] CVE-2022-32208 https://nvd.nist.gov/vuln/detail/CVE-2022-32208 [ 21 ] CVE-2022-32221 https://nvd.nist.gov/vuln/detail/CVE-2022-32221 [ 22 ] CVE-2022-35252 https://nvd.nist.gov/vuln/detail/CVE-2022-35252 [ 23 ] CVE-2022-35260 https://nvd.nist.gov/vuln/detail/CVE-2022-35260 [ 24 ] CVE-2022-42915 https://nvd.nist.gov/vuln/detail/CVE-2022-42915 [ 25 ] CVE-2022-42916 https://nvd.nist.gov/vuln/detail/CVE-2022-42916
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202212-01
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
APPLE-SA-2023-01-23-5 macOS Monterey 12.6.3
macOS Monterey 12.6.3 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213604.
AppleMobileFileIntegrity Available for: macOS Monterey Impact: An app may be able to access user-sensitive data Description: This issue was addressed by enabling hardened runtime. CVE-2023-23499: Wojciech Reguła (@_r3ggi) of SecuRing (wojciechregula.blog)
curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.86.0. CVE-2022-42915 CVE-2022-42916 CVE-2022-32221 CVE-2022-35260
curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.85.0. CVE-2022-35252
dcerpc Available for: macOS Monterey Impact: Mounting a maliciously crafted Samba network share may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. CVE-2023-23513: Dimitrios Tatsis and Aleksandar Nikolic of Cisco Talos
DiskArbitration Available for: macOS Monterey Impact: An encrypted volume may be unmounted and remounted by a different user without prompting for the password Description: A logic issue was addressed with improved state management. CVE-2023-23493: Oliver Norpoth (@norpoth) of KLIXX GmbH (klixx.com)
DriverKit Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: A type confusion issue was addressed with improved checks. CVE-2022-32915: Tommy Muir (@Muirey03)
Intel Graphics Driver Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2023-23507: an anonymous researcher
Kernel Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2023-23504: Adam Doupé of ASU SEFCOM
Kernel Available for: macOS Monterey Impact: An app may be able to determine kernel memory layout Description: An information disclosure issue was addressed by removing the vulnerable code. CVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. (@starlabs_sg)
PackageKit Available for: macOS Monterey Impact: An app may be able to gain root privileges Description: A logic issue was addressed with improved state management. CVE-2023-23497: Mickey Jin (@patch1t)
Screen Time Available for: macOS Monterey Impact: An app may be able to access information about a user’s contacts Description: A privacy issue was addressed with improved private data redaction for log entries. CVE-2023-23505: Wojciech Regula of SecuRing (wojciechregula.blog)
Weather Available for: macOS Monterey Impact: An app may be able to bypass Privacy preferences Description: The issue was addressed with improved memory handling. CVE-2023-23511: Wojciech Regula of SecuRing (wojciechregula.blog), an anonymous researcher
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: The issue was addressed with improved memory handling. WebKit Bugzilla: 248268 CVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE WebKit Bugzilla: 248268 CVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE
Windows Installer Available for: macOS Monterey Impact: An app may be able to bypass Privacy preferences Description: The issue was addressed with improved memory handling. CVE-2023-23508: Mickey Jin (@patch1t)
Additional recognition
Kernel We would like to acknowledge Nick Stenning of Replicate for their assistance.
macOS Monterey 12.6.3 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
The following advisory data is extracted from:
https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0428.json
Red Hat officially shut down their mailing list notifications on October 10, 2023. Because of this, Packet Storm has recreated the data below as a reference point to raise awareness. Note that, due to an inability to easily track revision updates without crawling Red Hat's archive, these advisories are single notifications; we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information if the subject matter listed pertains to your environment.
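The record below is JSON-LD. A short sketch of pulling vendor/model pairs out of its `affected_products` block; the field names match the record shown, while the inline document here is a trimmed stand-in for the full entry:

```python
import json

def affected_models(record: dict) -> list[tuple[str, str]]:
    # Collect the unique (vendor, model) pairs from a VARIoT-style record.
    entries = record.get("affected_products", {}).get("data", [])
    return sorted({(e["vendor"], e["model"]) for e in entries})

# Trimmed stand-in for the record shown below.
record = json.loads("""
{"affected_products": {"data": [
  {"model": "curl",  "vendor": "haxx",   "scope": "lt", "version": "7.85.0"},
  {"model": "h700s", "vendor": "netapp", "scope": "eq", "version": null}
]}}
""")
```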
Show details on source website{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202208-2263",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0.0"
},
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.3"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.85.0"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.7.3"
},
{
"model": "element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hci management node",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "clustered data ontap",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "eq",
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": "11.0 that\u0027s all 11.7.3"
},
{
"model": "curl",
"scope": null,
"trust": 0.8,
"vendor": "haxx",
"version": null
},
{
"model": "h700s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "eq",
"trust": 0.8,
"vendor": "apple",
"version": "12.0.0 and later, prior to 12.6.3"
},
{
"model": "h500s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "gnu/linux",
"scope": null,
"trust": 0.8,
"vendor": "debian",
"version": null
},
{
"model": "solidfire",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "bootstrap os",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "element software",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:haxx:curl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.85.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:element_software:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:bootstrap_os:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "11.7.3",
"versionStartIncluding": "11.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.6.3",
"versionStartIncluding": "12.0.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:9.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.0.6",
"versionStartIncluding": "9.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.2.12",
"versionStartIncluding": "8.2.0",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174021"
},
{
"db": "PACKETSTORM",
"id": "174080"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "176746"
}
],
"trust": 0.8
},
"cve": "CVE-2022-35252",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "LOW",
"baseScore": 3.7,
"baseSeverity": "LOW",
"confidentialityImpact": "NONE",
"exploitabilityScore": 2.2,
"impactScore": 1.4,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L",
"version": "3.1"
},
{
"attackComplexity": "High",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "Low",
"baseScore": 3.7,
"baseSeverity": "Low",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2022-35252",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-35252",
"trust": 1.8,
"value": "LOW"
},
{
"author": "CNNVD",
"id": "CNNVD-202208-4523",
"trust": 0.6,
"value": "LOW"
}
]
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "When curl is used to retrieve and parse cookies from a HTTP(S) server, it accepts cookies using control codes that, when later sent back to a HTTP server, might make the server return 400 responses, effectively allowing a \"sister site\" to deny service to all siblings. curl from Haxx and products from other vendors contain an unspecified vulnerability, which may result in a denial-of-service (DoS) condition. A security vulnerability exists in curl versions 4.9 through 7.84. ==========================================================================\nUbuntu Security Notice USN-5587-1\nSeptember 01, 2022\n\ncurl vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\ncurl could be denied access to HTTP(S) content if it received\na specially crafted cookie. \n\nSoftware Description:\n- curl: HTTP, HTTPS, and FTP client and client libraries\n\nDetails:\n\nAxel Chong discovered that when curl accepted and sent back\ncookies containing control bytes, a HTTP(S) server might\nreturn a 400 (Bad Request Error) response. A malicious cookie\nhost could possibly use this to cause denial-of-service. 
\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\ncurl 7.81.0-1ubuntu1.4\nlibcurl3-gnutls 7.81.0-1ubuntu1.4\nlibcurl3-nss 7.81.0-1ubuntu1.4\nlibcurl4 7.81.0-1ubuntu1.4\n\nUbuntu 20.04 LTS:\ncurl 7.68.0-1ubuntu2.13\nlibcurl3-gnutls 7.68.0-1ubuntu2.13\nlibcurl3-nss 7.68.0-1ubuntu2.13\nlibcurl4 7.68.0-1ubuntu2.13\n\nUbuntu 18.04 LTS:\ncurl 7.58.0-2ubuntu3.20\nlibcurl3-gnutls 7.58.0-2ubuntu3.20\nlibcurl3-nss 7.58.0-2ubuntu3.20\nlibcurl4 7.58.0-2ubuntu3.20\n\nUbuntu 16.04 ESM:\ncurl 7.47.0-1ubuntu2.19+esm5\nlibcurl3 7.47.0-1ubuntu2.19+esm5\nlibcurl3-gnutls 7.47.0-1ubuntu2.19+esm5\nlibcurl3-nss 7.47.0-1ubuntu2.19+esm5\n\nUbuntu 14.04 ESM:\ncurl 7.35.0-1ubuntu2.20+esm12\nlibcurl3 7.35.0-1ubuntu2.20+esm12\nlibcurl3-gnutls 7.35.0-1ubuntu2.20+esm12\nlibcurl3-nss 7.35.0-1ubuntu2.20+esm12\n\nIn general, a standard system update will make all the necessary changes. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.6.6 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/\n\nSecurity Fix(es):\n* CVE-2023-28856 redis: Insufficient validation of HINCRBYFLOAT command\n* CVE-2023-32314 vm2: Sandbox Escape\n* CVE-2023-32313 vm2: Inspect Manipulation\n\n3. 
Solution:\n\nFor Red Hat Advanced Cluster Management for Kubernetes, see the following\ndocumentation for details on how to install the images:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/install/installing#installing-while-connected-online\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2187525 - CVE-2023-28856 redis: Insufficient validation of HINCRBYFLOAT command\n2208376 - CVE-2023-32314 vm2: Sandbox Escape\n2208377 - CVE-2023-32313 vm2: Inspect Manipulation\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Low: curl security update\nAdvisory ID: RHSA-2023:2478-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:2478\nIssue date: 2023-05-09\nCVE Names: CVE-2022-35252 CVE-2022-43552\n====================================================================\n1. Summary:\n\nAn update for curl is now available for Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Low. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 9) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. 
\n\nSecurity Fix(es):\n\n* curl: Incorrect handling of control code characters in cookies\n(CVE-2022-35252)\n\n* curl: Use-after-free triggered by an HTTP proxy deny response\n(CVE-2022-43552)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 9.2 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2120718 - CVE-2022-35252 curl: Incorrect handling of control code characters in cookies\n2152652 - CVE-2022-43552 curl: Use-after-free triggered by an HTTP proxy deny response\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
9):\n\naarch64:\ncurl-debuginfo-7.76.1-23.el9.aarch64.rpm\ncurl-debugsource-7.76.1-23.el9.aarch64.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm\nlibcurl-debuginfo-7.76.1-23.el9.aarch64.rpm\nlibcurl-devel-7.76.1-23.el9.aarch64.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm\n\nppc64le:\ncurl-debuginfo-7.76.1-23.el9.ppc64le.rpm\ncurl-debugsource-7.76.1-23.el9.ppc64le.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm\nlibcurl-debuginfo-7.76.1-23.el9.ppc64le.rpm\nlibcurl-devel-7.76.1-23.el9.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm\n\ns390x:\ncurl-debuginfo-7.76.1-23.el9.s390x.rpm\ncurl-debugsource-7.76.1-23.el9.s390x.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm\nlibcurl-debuginfo-7.76.1-23.el9.s390x.rpm\nlibcurl-devel-7.76.1-23.el9.s390x.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm\n\nx86_64:\ncurl-debuginfo-7.76.1-23.el9.i686.rpm\ncurl-debuginfo-7.76.1-23.el9.x86_64.rpm\ncurl-debugsource-7.76.1-23.el9.i686.rpm\ncurl-debugsource-7.76.1-23.el9.x86_64.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm\nlibcurl-debuginfo-7.76.1-23.el9.i686.rpm\nlibcurl-debuginfo-7.76.1-23.el9.x86_64.rpm\nlibcurl-devel-7.76.1-23.el9.i686.rpm\nlibcurl-devel-7.76.1-23.el9.x86_64.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 
9):\n\nSource:\ncurl-7.76.1-23.el9.src.rpm\n\naarch64:\ncurl-7.76.1-23.el9.aarch64.rpm\ncurl-debuginfo-7.76.1-23.el9.aarch64.rpm\ncurl-debugsource-7.76.1-23.el9.aarch64.rpm\ncurl-minimal-7.76.1-23.el9.aarch64.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm\nlibcurl-7.76.1-23.el9.aarch64.rpm\nlibcurl-debuginfo-7.76.1-23.el9.aarch64.rpm\nlibcurl-minimal-7.76.1-23.el9.aarch64.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm\n\nppc64le:\ncurl-7.76.1-23.el9.ppc64le.rpm\ncurl-debuginfo-7.76.1-23.el9.ppc64le.rpm\ncurl-debugsource-7.76.1-23.el9.ppc64le.rpm\ncurl-minimal-7.76.1-23.el9.ppc64le.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm\nlibcurl-7.76.1-23.el9.ppc64le.rpm\nlibcurl-debuginfo-7.76.1-23.el9.ppc64le.rpm\nlibcurl-minimal-7.76.1-23.el9.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm\n\ns390x:\ncurl-7.76.1-23.el9.s390x.rpm\ncurl-debuginfo-7.76.1-23.el9.s390x.rpm\ncurl-debugsource-7.76.1-23.el9.s390x.rpm\ncurl-minimal-7.76.1-23.el9.s390x.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm\nlibcurl-7.76.1-23.el9.s390x.rpm\nlibcurl-debuginfo-7.76.1-23.el9.s390x.rpm\nlibcurl-minimal-7.76.1-23.el9.s390x.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm\n\nx86_64:\ncurl-7.76.1-23.el9.x86_64.rpm\ncurl-debuginfo-7.76.1-23.el9.i686.rpm\ncurl-debuginfo-7.76.1-23.el9.x86_64.rpm\ncurl-debugsource-7.76.1-23.el9.i686.rpm\ncurl-debugsource-7.76.1-23.el9.x86_64.rpm\ncurl-minimal-7.76.1-23.el9.x86_64.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm\nlibcurl-7.76.1-23.el9.i686.rpm\nlibcurl-7.76.1-23.el9.x86_64.rpm\nlibcurl-debuginfo-7.76.1-23.el9.i686.rpm\nlibcurl-debuginfo-7.76.1-23.el9.x86_64.rpm\nlibcurl-minimal-7.76.1-23.el9.i686.rpm\nlibcurl-minimal-7.76.1-23.el9.x86_64.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. 
Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-35252\nhttps://access.redhat.com/security/cve/CVE-2022-43552\nhttps://access.redhat.com/security/updates/classification/#low\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBZFo0V9zjgjWX9erEAQhmTw/9FUwLCGRKCmddNVTMAaay54EPggJFOPKx\nnN06YIqiK5arkX4SD58YZrX9J0gUZcwGs6s5WO35pG3F+qJXhe8E8fbzavqRG5NB\noxG+pDC5+6xQxK41tkuLYJoUhF1w4yG8SuMSzroLcpbut/MAjKGGw4qgyNGit1Su\nxFGrDTyFxtj+tUZIQCil0HAqlXswQ7G2ukB9kQBpxNRfR0V2ANfmfkkGj8+xWauh\nL1PcaDezNWgAbgWbuf3mHNiwDMxWsNfcwCbx3P8sF+vRe7q5RdIFNL1oXJkPxQVy\nC6L29KcaLYxToNmUNyrOncWAj8KSlrDngVq3NXnG34lVzqz2t/ouc/0lX4Jc9qTL\nmGwYoXvlTqQgV4hGQPfDufApaukxgZfcSidSfqlNt1amYYNiYcvIyf15dht87ipB\n27ahZWDKvunB4gqMG62XNHyiu9bKmDCyL57ggUBt3wxJ7H9M/OgjsI7C/i/10SMT\nD75GjYaU2TWyGLd4SvbV6/3pA3zAZ0Ffqc66uANwfBXC7jFd2/ykEBir3vJYTq17\nr2YWYgH2sma5kwb7ZHQhLKk+N2a0g1KX+Mr0V2wJ+yAYwkbz6wu/BVDXstBFkumJ\n/iKmtOn0Mk07wo/3wvWu5M4tk4kZzmLzs1/ybH3GWOUbFUxbqgOos3/0Vi/uSW88\nYxf4bV/uBmU=HlZ2\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. This product release includes bug fixes and security\nupdate for the following packages: windows-machine-config-operator and\nwindows-machine-config-operator-bundle. Description:\n\nRed Hat OpenShift support for Windows Containers allows you to deploy\nWindows container workloads running on Windows Server containers. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2174485 - CVE-2023-25173 containerd: Supplementary groups are not set up properly\n\n5. JIRA issues fixed (https://issues.redhat.com/):\n\nOCPBUGS-10418 - Case sensitivity issue when label \"openshift.io/cluster-monitoring\" set to \u0027True\u0027 on openshift-windows-machine-config-operator namespace\nOCPBUGS-11831 - oc adm node-logs failing in vSphere CI\nOCPBUGS-15435 - Instance configurations fails on Windows Server 2019 without the container feature\nOCPBUGS-3572 - Check if Windows defender is running doesnt work\nOCPBUGS-4247 - Load balancer shows connectivity outage during Windows nodes upgrade\nOCPBUGS-5894 - Windows nodes do not get drained (deconfigure) during the upgrade process\nOCPBUGS-7726 - WMCO kubelet version not matching OCP payload\u0027s one\nOCPBUGS-8055 - containerd version is being misreported\nWINC-818 - Investigate if the Upgradeable condition is being tested in e2e suite\nWINC-823 - Test generated community manifests in WMCO e2e\n\n6. Description:\n\nVolSync is a Kubernetes operator that enables asynchronous replication of\npersistent volumes within a cluster, or across clusters. After deploying\nthe VolSync operator, it can create and maintain copies of your persistent\ndata. \n\nFor more information about VolSync, see:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/add-ons-overview#volsync\n\nor the VolSync open source community website at:\nhttps://volsync.readthedocs.io/en/stable/. \n\nSecurity fix(es): * CVE-2023-3089 openshift: OCP \u0026 FIPS mode\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2212085 - CVE-2023-3089 openshift: OCP \u0026 FIPS mode\n\n5. 
This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. After installing the updated packages, the\nhttpd daemon will be restarted automatically. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds\n2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody\n2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection\n2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling\n2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read\n2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()\n2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()\n2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability\n2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2099305 - CVE-2022-32207 curl: Unpreserved file permissions\n2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification\n2120718 - CVE-2022-35252 curl: control code in cookie denial of service\n2135411 - CVE-2022-32221 curl: POST following PUT confusion\n2135413 - CVE-2022-42915 curl: HTTP proxy double-free\n2135416 - CVE-2022-42916 curl: HSTS bypass via IDN\n\n6. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: curl: Multiple Vulnerabilities\n Date: December 19, 2022\n Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. \n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/curl \u003c 7.86.0 \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] CVE-2022-30115\n https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] CVE-2022-32206\n https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n 
https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2023-01-23-5 macOS Monterey 12.6.3\n\nmacOS Monterey 12.6.3 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213604. \n\nAppleMobileFileIntegrity\nAvailable for: macOS Monterey\nImpact: An app may be able to access user-sensitive data\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2023-23499: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n(wojciechregula.blog)\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.86.0. \nCVE-2022-42915\nCVE-2022-42916\nCVE-2022-32221\nCVE-2022-35260\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.85.0. \nCVE-2022-35252\n\ndcerpc\nAvailable for: macOS Monterey\nImpact: Mounting a maliciously crafted Samba network share may lead\nto arbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. 
\nCVE-2023-23513: Dimitrios Tatsis and Aleksandar Nikolic of Cisco\nTalos\n\nDiskArbitration\nAvailable for: macOS Monterey\nImpact: An encrypted volume may be unmounted and remounted by a\ndifferent user without prompting for the password\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23493: Oliver Norpoth (@norpoth) of KLIXX GmbH (klixx.com)\n\nDriverKit\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A type confusion issue was addressed with improved\nchecks. \nCVE-2022-32915: Tommy Muir (@Muirey03)\n\nIntel Graphics Driver\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2023-23507: an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23504: Adam Doup\u00e9 of ASU SEFCOM\n\nKernel\nAvailable for: macOS Monterey\nImpact: An app may be able to determine kernel memory layout\nDescription: An information disclosure issue was addressed by\nremoving the vulnerable code. \nCVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. (@starlabs_sg)\n\nPackageKit\nAvailable for: macOS Monterey\nImpact: An app may be able to gain root privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23497: Mickey Jin (@patch1t)\n\nScreen Time\nAvailable for: macOS Monterey\nImpact: An app may be able to access information about a user\u2019s\ncontacts\nDescription: A privacy issue was addressed with improved private data\nredaction for log entries. 
\nCVE-2023-23505: Wojciech Regula of SecuRing (wojciechregula.blog)\n\nWeather\nAvailable for: macOS Monterey\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23511: Wojciech Regula of SecuRing (wojciechregula.blog), an\nanonymous researcher\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: The issue was addressed with improved memory handling. \nWebKit Bugzilla: 248268\nCVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\nWebKit Bugzilla: 248268\nCVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\n\nWindows Installer\nAvailable for: macOS Monterey\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23508: Mickey Jin (@patch1t)\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Nick Stenning of Replicate for their\nassistance. \n\nmacOS Monterey 12.6.3 may be obtained from the Mac App Store or\nApple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThe following advisory data is extracted from:\n\nhttps://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0428.json\n\nRed Hat officially shut down their mailing list notifications October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. 
It must be noted that due to an inability to easily track revision updates without crawling Red Hat\u0027s archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-35252"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"db": "VULHUB",
"id": "VHN-428403"
},
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "PACKETSTORM",
"id": "168239"
},
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174021"
},
{
"db": "PACKETSTORM",
"id": "174080"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "176746"
}
],
"trust": 2.79
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-35252",
"trust": 4.5
},
{
"db": "HACKERONE",
"id": "1613943",
"trust": 2.5
},
{
"db": "PACKETSTORM",
"id": "168239",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-23-103-09",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-23-075-01",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-23-131-05",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-23-166-12",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU98195668",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU99752892",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU94715153",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU99464755",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2022-018757",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.4343",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6333",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4375",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.2163",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3143",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3060",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4374",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "170698",
"trust": 0.6
},
{
"db": "VULHUB",
"id": "VHN-428403",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-35252",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "172378",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "172587",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "172195",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "174021",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "174080",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170166",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170165",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170697",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "176746",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428403"
},
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"db": "PACKETSTORM",
"id": "168239"
},
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174021"
},
{
"db": "PACKETSTORM",
"id": "174080"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "176746"
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"id": "VAR-202208-2263",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-428403"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T21:44:51.339000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "HT213604",
"trust": 0.8,
"url": "https://lists.debian.org/debian-lts-announce/2023/01/msg00028.html"
},
{
"title": "curl Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=206230"
},
{
"title": "Debian CVElist Bug Report Logs: curl: CVE-2022-35252: control code in cookie denial of service",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=f071eb46e3ac96bc3c50d0406c2d0685"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/jtmotox/docker-trivy "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "NVD-CWE-noinfo",
"trust": 1.0
},
{
"problemtype": "Lack of information (CWE-noinfo) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 2.6,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 2.5,
"url": "http://seclists.org/fulldisclosure/2023/jan/20"
},
{
"trust": 2.5,
"url": "http://seclists.org/fulldisclosure/2023/jan/21"
},
{
"trust": 2.5,
"url": "https://hackerone.com/reports/1613943"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20220930-0005/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213603"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213604"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2023/01/msg00028.html"
},
{
"trust": 1.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
},
{
"trust": 1.3,
"url": "https://access.redhat.com/security/cve/cve-2022-35252"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99464755/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99752892/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu94715153/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu98195668/"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-166-12"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-075-01"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-103-09"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-131-05"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170698/apple-security-advisory-2023-01-23-6.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.2163"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3060"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-35252/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213604"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/curl-denial-of-service-via-cookies-control-codes-39156"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168239/ubuntu-security-notice-usn-5587-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4374"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4343"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4375"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6333"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-43552"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-43552"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2023-0361"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2023-27535"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-36227"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-1667"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-26604"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-36227"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-2283"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-27535"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-1667"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-26604"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-24736"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24736"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-0361"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-2283"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28614"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23943"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32207"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26377"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30522"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-31813"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-42915"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28615"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-42916"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22721"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31813"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28614"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28330"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28615"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28330"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26377"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23943"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30522"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32221"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915"
},
{
"trust": 0.1,
"url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1018831"
},
{
"trust": 0.1,
"url": "https://github.com/jtmotox/docker-trivy"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.20"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5587-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.81.0-1ubuntu1.4"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.68.0-1ubuntu2.13"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.8_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:2963"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41674"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#critical"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2196"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3625"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-43750"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3239"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26341"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3239"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-25815"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42722"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1679"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3707"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1582"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1462"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-22490"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3028"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-20141"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-32314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-47929"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39188"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-32313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1999"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26341"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3627"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-20141"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-28856"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2196"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23454"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25265"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39189"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3970"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3028"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3567"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0394"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0461"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33655"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-25652"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33655"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:3326"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3564"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1195"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/install/installing#installing-while-connected-online"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23946"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42703"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25265"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-29007"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1462"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1679"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:2478"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27191"
},
{
"trust": 0.1,
"url": "https://issues.redhat.com/):"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-25173"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:4488"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27191"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-25173"
},
{
"trust": 0.1,
"url": "https://volsync.readthedocs.io/en/stable/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:4576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-38408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/add-ons-overview#volsync"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/add-ons-overview#volsync-rep"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-3089"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-24329"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2023-001"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-24329"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-38408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-3089"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8840"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8841"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23507"
},
{
"trust": 0.1,
"url": "https://support.apple.com/downloads/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23493"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23497"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23504"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23505"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23499"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23508"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213604."
},
{
"trust": 0.1,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23502"
},
{
"trust": 0.1,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2152652"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2024:0428"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0428.json"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2179073"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2120718"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2179092"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2252030"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2196793"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428403"
},
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"db": "PACKETSTORM",
"id": "168239"
},
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174021"
},
{
"db": "PACKETSTORM",
"id": "174080"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "176746"
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-428403"
},
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"db": "PACKETSTORM",
"id": "168239"
},
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174021"
},
{
"db": "PACKETSTORM",
"id": "174080"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "176746"
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-09-23T00:00:00",
"db": "VULHUB",
"id": "VHN-428403"
},
{
"date": "2023-10-23T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"date": "2022-09-02T15:21:41",
"db": "PACKETSTORM",
"id": "168239"
},
{
"date": "2023-05-16T17:09:54",
"db": "PACKETSTORM",
"id": "172378"
},
{
"date": "2023-05-26T14:34:05",
"db": "PACKETSTORM",
"id": "172587"
},
{
"date": "2023-05-09T15:14:58",
"db": "PACKETSTORM",
"id": "172195"
},
{
"date": "2023-08-07T15:59:32",
"db": "PACKETSTORM",
"id": "174021"
},
{
"date": "2023-08-09T15:56:32",
"db": "PACKETSTORM",
"id": "174080"
},
{
"date": "2022-12-08T21:28:44",
"db": "PACKETSTORM",
"id": "170166"
},
{
"date": "2022-12-08T21:28:21",
"db": "PACKETSTORM",
"id": "170165"
},
{
"date": "2022-12-19T13:48:31",
"db": "PACKETSTORM",
"id": "170303"
},
{
"date": "2023-01-24T16:41:07",
"db": "PACKETSTORM",
"id": "170697"
},
{
"date": "2024-01-26T15:24:15",
"db": "PACKETSTORM",
"id": "176746"
},
{
"date": "2022-08-31T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"date": "2022-09-23T14:15:12.323000",
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-03-01T00:00:00",
"db": "VULHUB",
"id": "VHN-428403"
},
{
"date": "2023-10-23T07:11:00",
"db": "JVNDB",
"id": "JVNDB-2022-018757"
},
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"date": "2024-03-27T15:00:36.607000",
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
}
],
"trust": 0.6
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Haxx\u00a0 of \u00a0cURL\u00a0 Vulnerabilities in Products from Other Vendors",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-018757"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
}
],
"trust": 0.6
}
}
VAR-202203-0664
Vulnerability from variot - Updated: 2024-07-23 21:44

BIND 9.11.0 -> 9.11.36, 9.12.0 -> 9.16.26, 9.17.0 -> 9.18.0. BIND Supported Preview Editions: 9.11.4-S1 -> 9.11.36-S1, 9.16.8-S1 -> 9.16.26-S1. Versions of BIND 9 earlier than those shown - back to 9.1.0, including Supported Preview Editions - are also believed to be affected but have not been tested as they are EOL. The cache could become poisoned with incorrect records, leading to queries being made to the wrong servers, which might also result in false information being returned to clients. Bogus NS records supplied by the forwarders may be cached and used by named if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. (CVE-2021-25220) By flooding the target resolver with queries exploiting this flaw, an attacker can significantly impair the resolver's performance, effectively denying legitimate clients access to the DNS resolution service. (CVE-2022-2795) By spoofing the target resolver with responses that have a malformed ECDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38177) By spoofing the target resolver with responses that have a malformed EdDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38178).

==========================================================================
Ubuntu Security Notice USN-5332-1
March 17, 2022
bind9 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in Bind.
Software Description: - bind9: Internet Domain Name Server
Details:
Xiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind incorrectly handled certain bogus NS records when using forwarders. A remote attacker could possibly use this issue to manipulate cache results. (CVE-2021-25220)
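The forwarding setup affected by CVE-2021-25220 can be illustrated with a minimal named.conf fragment. This is a hypothetical sketch, not taken from the advisory; the forwarder address is an RFC 5737 documentation address:

```conf
// Hypothetical resolver configuration of the kind affected by
// CVE-2021-25220: queries go to a forwarder first, and named falls
// back to its own recursion if the forwarder does not answer.
options {
    recursion yes;
    forwarders { 192.0.2.1; };   // example forwarder (documentation address)
    forward first;               // recurse locally on forwarder failure
};
```

With "forward first", bogus NS records cached from a malicious or compromised forwarder could later be used when named recurses on its own, which is the poisoning path the advisory describes.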
It was discovered that Bind incorrectly handled certain crafted TCP streams. A remote attacker could possibly use this issue to cause Bind to consume resources, leading to a denial of service. This issue only affected Ubuntu 21.10. (CVE-2022-0396)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.10: bind9 1:9.16.15-1ubuntu1.2
Ubuntu 20.04 LTS: bind9 1:9.16.1-0ubuntu2.10
Ubuntu 18.04 LTS: bind9 1:9.11.3+dfsg-1ubuntu1.17
In general, a standard system update will make all the necessary changes.
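The affected ranges given at the top of this entry (9.11.0 -> 9.11.36, 9.12.0 -> 9.16.26, 9.17.0 -> 9.18.0) can be checked mechanically against an installed version. A minimal sketch, not an official ISC or Ubuntu tool; it ignores the -S1 Supported Preview suffixes and the untested pre-9.11 versions the advisory also mentions:

```python
# Check whether a BIND 9 version string falls inside the affected
# ranges listed in this advisory entry.

def parse(version):
    # "9.16.27" -> (9, 16, 27); strips suffixes such as "-S1"
    return tuple(int(part) for part in version.split("-")[0].split("."))

# Affected ranges for CVE-2021-25220 as stated above.
AFFECTED = [
    (parse("9.11.0"), parse("9.11.36")),
    (parse("9.12.0"), parse("9.16.26")),
    (parse("9.17.0"), parse("9.18.0")),
]

def is_affected(version):
    v = parse(version)
    return any(lo <= v <= hi for lo, hi in AFFECTED)

print(is_affected("9.16.1"))   # Ubuntu 20.04's base version: affected
print(is_affected("9.16.27"))  # first fixed 9.16 release: not affected
```

Tuple comparison handles the component-wise ordering, so "9.16.27" correctly sorts after "9.16.26" but before "9.17.0".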
For the oldstable distribution (buster), this problem has been fixed in version 1:9.11.5.P4+dfsg-5.1+deb10u7.
For the stable distribution (bullseye), this problem has been fixed in version 1:9.16.27-1~deb11u1.
We recommend that you upgrade your bind9 packages.
For the detailed security status of bind9 please refer to its security tracker page at: https://security-tracker.debian.org/tracker/bind9
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org
Gentoo Linux Security Advisory GLSA 202210-25
https://security.gentoo.org/
Severity: Low
Title: ISC BIND: Multiple Vulnerabilities
Date: October 31, 2022
Bugs: #820563, #835439, #872206
ID: 202210-25
Synopsis
Multiple vulnerabilities have been discovered in ISC BIND, the worst of which could result in denial of service.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1  net-dns/bind        < 9.16.33    >= 9.16.33
2  net-dns/bind-tools  < 9.16.33    >= 9.16.33
Description
Multiple vulnerabilities have been discovered in ISC BIND. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All ISC BIND users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-9.16.33"
All ISC BIND-tools users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-tools-9.16.33"
References
[ 1 ] CVE-2021-25219 https://nvd.nist.gov/vuln/detail/CVE-2021-25219
[ 2 ] CVE-2021-25220 https://nvd.nist.gov/vuln/detail/CVE-2021-25220
[ 3 ] CVE-2022-0396 https://nvd.nist.gov/vuln/detail/CVE-2022-0396
[ 4 ] CVE-2022-2795 https://nvd.nist.gov/vuln/detail/CVE-2022-2795
[ 5 ] CVE-2022-2881 https://nvd.nist.gov/vuln/detail/CVE-2022-2881
[ 6 ] CVE-2022-2906 https://nvd.nist.gov/vuln/detail/CVE-2022-2906
[ 7 ] CVE-2022-3080 https://nvd.nist.gov/vuln/detail/CVE-2022-3080
[ 8 ] CVE-2022-38177 https://nvd.nist.gov/vuln/detail/CVE-2022-38177
[ 9 ] CVE-2022-38178 https://nvd.nist.gov/vuln/detail/CVE-2022-38178
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-25
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: bind security update
Advisory ID:       RHSA-2023:0402-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:0402
Issue date:        2023-01-24
CVE Names:         CVE-2021-25220 CVE-2022-2795
====================================================================

1. Summary:
An update for bind is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly.
Security Fix(es):
* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)

* bind: processing large delegations may severely degrade resolver performance (CVE-2022-2795)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
After installing the update, the BIND daemon (named) will be restarted automatically.
- Bugs fixed (https://bugzilla.redhat.com/):
2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability 2128584 - CVE-2022-2795 bind: processing large delegations may severely degrade resolver performance
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
ppc64: bind-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm
ppc64le: bind-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm
s390x: bind-9.11.4-26.P2.el7_9.13.s390x.rpm bind-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.s390.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.s390x.rpm bind-utils-9.11.4-26.P2.el7_9.13.s390x.rpm
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm
ppc64le: bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-sdb-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm
s390x: bind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm bind-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-sdb-9.11.4-26.P2.el7_9.13.s390x.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64:
bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm
bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm
bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm
bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm
bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm
bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm
bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm
bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm
bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm
bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm
bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm
bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-25220
https://access.redhat.com/security/cve/CVE-2022-2795
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:
The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. The dhcp packages provide a relay agent and ISC DHCP service required to enable and administer DHCP on a network.
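To illustrate the DHCP message exchange described above, the following is a minimal sketch (not part of the advisory) that builds a DHCPDISCOVER message using the fixed BOOTP field layout from RFC 2131; the function name and defaults are illustrative assumptions:

```python
import struct

def build_dhcp_discover(mac: bytes, xid: int = 0x12345678) -> bytes:
    """Build a minimal DHCPDISCOVER message following the RFC 2131 layout."""
    # Fixed BOOTP header: op=1 (BOOTREQUEST), htype=1 (Ethernet), hlen=6,
    # hops=0, transaction id, secs=0, flags=0x8000 (ask for broadcast reply).
    header = struct.pack("!BBBBIHH", 1, 1, 6, 0, xid, 0, 0x8000)
    addrs = b"\x00" * 16                  # ciaddr, yiaddr, siaddr, giaddr (all zero)
    chaddr = mac.ljust(16, b"\x00")       # client hardware address, padded to 16 bytes
    sname_file = b"\x00" * (64 + 128)     # server name and boot file fields, unused
    magic = b"\x63\x82\x53\x63"           # DHCP magic cookie at offset 236
    options = b"\x35\x01\x01\xff"         # option 53 (message type) = 1 DISCOVER, then end
    return header + addrs + chaddr + sname_file + magic + options
```

A client would broadcast this datagram to UDP port 67 and parse the server's DHCPOFFER for the offered IP address, subnet mask, and broadcast address mentioned above.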
The following advisory data is extracted from:
https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_2720.json
Red Hat officially shut down their mailing list notifications on October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. Note that because revision updates cannot easily be tracked without crawling Red Hat's archive, these advisories are single notifications; we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment.
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202203-0664",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.16.8"
},
{
"model": "bind",
"scope": "lt",
"trust": 1.0,
"vendor": "isc",
"version": "9.16.27"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "19.4"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "20.3"
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "19.3"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "21.2"
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "20.2"
},
{
"model": "bind",
"scope": "lte",
"trust": 1.0,
"vendor": "isc",
"version": "9.18.0"
},
{
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.11.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "junos",
"scope": "lt",
"trust": 1.0,
"vendor": "juniper",
"version": "19.3"
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "21.4"
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "21.1"
},
{
"model": "bind",
"scope": "lt",
"trust": 1.0,
"vendor": "isc",
"version": "9.11.37"
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "22.1"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.12.0"
},
{
"model": "sinec ins",
"scope": "eq",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "21.3"
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "22.2"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.17.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.11.4"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "sinec ins",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "20.4"
},
{
"model": "bind",
"scope": null,
"trust": 0.8,
"vendor": "isc",
"version": null
},
{
"model": "fedora",
"scope": null,
"trust": 0.8,
"vendor": "fedora",
"version": null
},
{
"model": "esmpro/serveragent",
"scope": null,
"trust": 0.8,
"vendor": "\u65e5\u672c\u96fb\u6c17",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:supported_preview:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.16.27",
"versionStartIncluding": "9.16.8",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:supported_preview:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.11.37",
"versionStartIncluding": "9.11.4",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:-:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.16.27",
"versionStartIncluding": "9.12.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:-:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.11.37",
"versionStartIncluding": "9.11.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:-:*:*:*",
"cpe_name": [],
"versionEndIncluding": "9.18.0",
"versionStartIncluding": "9.17.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "19.3",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s7:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s5:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s6:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s5:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s6:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s6:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s5:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1-s4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s5:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s8:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s6:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s7:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s7:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3-s4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r1-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r2-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r2-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r2-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r1-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r2-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r1-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3-s4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3-s3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3-s4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r2-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r2-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r2-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r2-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r3-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r3-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r1-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r2-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r2-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r3-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r1-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r2-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r2-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:r1-s2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:r2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:22.1:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:22.1:r1-s1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:juniper:junos:22.2:r1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:juniper:srx100:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx110:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx1400:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx1500:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx210:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx220:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx240:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx240h2:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx240m:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx300:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx320:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx340:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx3400:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx345:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx3600:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx380:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx4000:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx4100:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx4200:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx4600:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx5000:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx5400:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx550:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx550_hm:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx550m:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx5600:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx5800:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
},
{
"cpe23Uri": "cpe:2.3:h:juniper:srx650:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Siemens reported these vulnerabilities to CISA.",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
}
],
"trust": 0.6
},
"cve": "CVE-2021-25220",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "SINGLE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 4.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.0,
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:S/C:N/I:P/A:N",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Network",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 5.0,
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2021-25220",
"impactScore": null,
"integrityImpact": "Partial",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "Medium",
"trust": 0.8,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "SINGLE",
"author": "VULMON",
"availabilityImpact": "NONE",
"baseScore": 4.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.0,
"id": "CVE-2021-25220",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "MEDIUM",
"trust": 0.1,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:S/C:N/I:P/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 6.8,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 2.3,
"impactScore": 4.0,
"integrityImpact": "HIGH",
"privilegesRequired": "HIGH",
"scope": "CHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:N/I:H/A:N",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 8.6,
"baseSeverity": "High",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2021-25220",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "None",
"scope": "Changed",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:H/A:N",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-25220",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "security-officer@isc.org",
"id": "CVE-2021-25220",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "NVD",
"id": "CVE-2021-25220",
"trust": 0.8,
"value": "High"
},
{
"author": "CNNVD",
"id": "CNNVD-202203-1514",
"trust": 0.6,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2021-25220",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "BIND 9.11.0 -\u003e 9.11.36 9.12.0 -\u003e 9.16.26 9.17.0 -\u003e 9.18.0 BIND Supported Preview Editions: 9.11.4-S1 -\u003e 9.11.36-S1 9.16.8-S1 -\u003e 9.16.26-S1 Versions of BIND 9 earlier than those shown - back to 9.1.0, including Supported Preview Editions - are also believed to be affected but have not been tested as they are EOL. The cache could become poisoned with incorrect records leading to queries being made to the wrong servers, which might also result in false information being returned to clients. Bogus NS records supplied by the forwarders may be cached and used by name if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. (CVE-2021-25220)\nBy flooding the target resolver with queries exploiting this flaw an attacker can significantly impair the resolver\u0027s performance, effectively denying legitimate clients access to the DNS resolution service. (CVE-2022-2795)\nBy spoofing the target resolver with responses that have a malformed ECDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38177)\nBy spoofing the target resolver with responses that have a malformed EdDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38178). ==========================================================================\nUbuntu Security Notice USN-5332-1\nMarch 17, 2022\n\nbind9 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in Bind. 
\n\nSoftware Description:\n- bind9: Internet Domain Name Server\n\nDetails:\n\nXiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind\nincorrectly handled certain bogus NS records when using forwarders. A\nremote attacker could possibly use this issue to manipulate cache results. \n(CVE-2021-25220)\n\nIt was discovered that Bind incorrectly handled certain crafted TCP\nstreams. A remote attacker could possibly use this issue to cause Bind to\nconsume resources, leading to a denial of service. This issue only affected\nUbuntu 21.10. (CVE-2022-0396)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.10:\n bind9 1:9.16.15-1ubuntu1.2\n\nUbuntu 20.04 LTS:\n bind9 1:9.16.1-0ubuntu2.10\n\nUbuntu 18.04 LTS:\n bind9 1:9.11.3+dfsg-1ubuntu1.17\n\nIn general, a standard system update will make all the necessary changes. \n\nFor the oldstable distribution (buster), this problem has been fixed\nin version 1:9.11.5.P4+dfsg-5.1+deb10u7. \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1:9.16.27-1~deb11u1. \n\nWe recommend that you upgrade your bind9 packages. 
\n\nFor the detailed security status of bind9 please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/bind9\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmI010UACgkQEMKTtsN8\nTjbp3xAAil38qfAIdNkaIxY2bauvTyZDWzr6KUjph0vzmLEoAFQ3bysVSGlCnZk9\nIgdyfPRWQ+Bjau1/dlhNYaTlnQajbeyvCXfJcjRRgtUDCp7abZcOcb1WDu8jWLGW\niRtKsvKKrTKkIou5LgDlyqZyf6OzjgRdwtm86GDPQiCaSEpmbRt+APj5tkIA9R1G\nELWuZsjbIraBU0TsNfOalgNpAWtSBayxKtWB69J8rxUV69JI194A4AJ0wm9SPpFV\nG/TzlyHp1dUZJRLNmZOZU/dq4pPsXzh9I4QCg1kJWsVHe2ycAJKho6hr5iy43fNl\nMuokfI9YnU6/9SjHrQAWp1X/6MYCR8NieJ933W89/Zb8eTjTZC8EQGo6fkA287G8\nglQOrJHMQyV+b97lT67+ioTHNzTEBXTih7ZDeC1TlLqypCNYhRF/ll0Hx/oeiJFU\nrbjh2Og9huhD5JH8z8YAvY2g81e7KdPxazuKJnQpxGutqddCuwBvyI9fovYrah9W\nbYD6rskLZM2x90RI2LszHisl6FV5k37PaczamlRqGgbbMb9YlnDFjJUbM8rZZgD4\n+8u/AkHq2+11pTtZ40NYt1gpdidmIC/gzzha2TfZCHMs44KPMMdH+Fid1Kc6/Cq8\nQygtL4M387J9HXUrlN7NDUOrDVuVqfBG+ve3i9GCZzYjwtajTAQ=\n=6st2\n-----END PGP SIGNATURE-----\n. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-25\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: ISC BIND: Multiple Vulnerabilities\n Date: October 31, 2022\n Bugs: #820563, #835439, #872206\n ID: 202210-25\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC BIND, the worst of\nwhich could result in denial of service. 
\n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-dns/bind \u003c 9.16.33 \u003e= 9.16.33\n 2 net-dns/bind-tools \u003c 9.16.33 \u003e= 9.16.33\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC BIND. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll ISC BIND users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-9.16.33\"\n\nAll ISC BIND-tools users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-tools-9.16.33\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25219\n https://nvd.nist.gov/vuln/detail/CVE-2021-25219\n[ 2 ] CVE-2021-25220\n https://nvd.nist.gov/vuln/detail/CVE-2021-25220\n[ 3 ] CVE-2022-0396\n https://nvd.nist.gov/vuln/detail/CVE-2022-0396\n[ 4 ] CVE-2022-2795\n https://nvd.nist.gov/vuln/detail/CVE-2022-2795\n[ 5 ] CVE-2022-2881\n https://nvd.nist.gov/vuln/detail/CVE-2022-2881\n[ 6 ] CVE-2022-2906\n https://nvd.nist.gov/vuln/detail/CVE-2022-2906\n[ 7 ] CVE-2022-3080\n https://nvd.nist.gov/vuln/detail/CVE-2022-3080\n[ 8 ] CVE-2022-38177\n https://nvd.nist.gov/vuln/detail/CVE-2022-38177\n[ 9 ] CVE-2022-38178\n https://nvd.nist.gov/vuln/detail/CVE-2022-38178\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-25\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: bind security update\nAdvisory ID: RHSA-2023:0402-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:0402\nIssue date: 2023-01-24\nCVE Names: CVE-2021-25220 CVE-2022-2795\n====================================================================\n1. Summary:\n\nAn update for bind is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe Berkeley Internet Name Domain (BIND) is an implementation of the Domain\nName System (DNS) protocols. 
BIND includes a DNS server (named); a resolver\nlibrary (routines for applications to use when interfacing with DNS); and\ntools for verifying that the DNS server is operating correctly. \n\nSecurity Fix(es):\n\n* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)\n\n* bind: processing large delegations may severely degrade resolver\nperformance (CVE-2022-2795)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nAfter installing the update, the BIND daemon (named) will be restarted\nautomatically. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability\n2128584 - CVE-2022-2795 bind: processing large delegations may severely degrade resolver performance\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 
7):\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 
7):\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nppc64:\nbind-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm\n\nppc64le:\nbind-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-libs-lite-9.11.4-26.P2
.el7_9.13.ppc64le.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm\n\ns390x:\nbind-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.s390x.rpm\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm\n\nppc64le:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm\n\ns390x:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.
el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25220\nhttps://access.redhat.com/security/cve/CVE-2022-2795\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY9AIs9zjgjWX9erEAQiz9BAAiQvmAQ5DWdOQbHHizPAHBnKnBtNBfCT3\niaAzKQ0Yrpk26N9cdrvcBJwdrHpI28VJ3eemFUxQFseUqtAErsgfL4QqnjPjQgsp\nU2qLPjqbzfOrbi1CuruMMIIbtxfwvsdic8OB9Zi7XzfZjWm2X4c6Ima+QXol6x9a\n8J2qdzCqhoYUXJgdpVK9nAAGsPtidcnqLYYIcTclJArp6uRSlEEk7EbNJvs2SAbj\nMUo5aq5BoVy2TkiMyqhT5voy6K8f4c7WbQYerNieps18541ZSr29fAzWBznr3Yns\ngE10Aaoa8uCxlaexFR8EahPVYe6wJAm6R62LBabEWChbzW0oxr7X2DdzX9eiOwl0\nwJT0n4GHoFsCGMa+v1yybkjHIUfiW25WC7bC4QDj4fjTpbicVlnttXhQJwCJK5bb\nPC27GE6qi7EqwHYJa/jPenbIG38mXj/r2bwIr1qYQMLjQ8BQIneShky3ZWE4l/jd\nzTMwGVal8ACBYdCALx/O9QNyzaO92xHLnKl3DIoqaQdjasIfGp/G6Xc1YggKyZAP\nVVtXPiOIbReBVNWiBXMH1ZEQeNon4su0/MbMWrmJpwvEzYeXkuWO98LZ4dlLVuim\nNG/dJ6RqzT6/aqRNVyOt5s4SLIQ5DrPXoPnZRUBsbpWhP6lxPhESKA0TUg5FYz33\neDGIrZR4jEY=azJw\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nThe Dynamic Host Configuration Protocol (DHCP) is a protocol that allows\nindividual devices on an IP network to get their own network configuration\ninformation, including an IP address, a subnet mask, and a broadcast\naddress. The dhcp packages provide a relay agent and ISC DHCP service\nrequired to enable and administer DHCP on a network. \n\nThe following advisory data is extracted from:\n\nhttps://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_2720.json\n\nRed Hat officially shut down their mailing list notifications October 10, 2023. 
Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. It must be noted that due to an inability to easily track revision updates without crawling Red Hat\u0027s archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-25220"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "PACKETSTORM",
"id": "166356"
},
{
"db": "PACKETSTORM",
"id": "166354"
},
{
"db": "PACKETSTORM",
"id": "169261"
},
{
"db": "PACKETSTORM",
"id": "169745"
},
{
"db": "PACKETSTORM",
"id": "169773"
},
{
"db": "PACKETSTORM",
"id": "169587"
},
{
"db": "PACKETSTORM",
"id": "170724"
},
{
"db": "PACKETSTORM",
"id": "169846"
},
{
"db": "PACKETSTORM",
"id": "178475"
}
],
"trust": 2.52
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-25220",
"trust": 4.2
},
{
"db": "SIEMENS",
"id": "SSA-637483",
"trust": 1.7
},
{
"db": "ICS CERT",
"id": "ICSA-22-258-05",
"trust": 1.5
},
{
"db": "JVN",
"id": "JVNVU99475301",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU98927070",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166356",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169773",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169587",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170724",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169846",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.1150",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5750",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4616",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1223",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1289",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2694",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1183",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1160",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032124",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031701",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031728",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "169894",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514",
"trust": 0.6
},
{
"db": "VULMON",
"id": "CVE-2021-25220",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166354",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169261",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169745",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "178475",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "PACKETSTORM",
"id": "166356"
},
{
"db": "PACKETSTORM",
"id": "166354"
},
{
"db": "PACKETSTORM",
"id": "169261"
},
{
"db": "PACKETSTORM",
"id": "169745"
},
{
"db": "PACKETSTORM",
"id": "169773"
},
{
"db": "PACKETSTORM",
"id": "169587"
},
{
"db": "PACKETSTORM",
"id": "170724"
},
{
"db": "PACKETSTORM",
"id": "169846"
},
{
"db": "PACKETSTORM",
"id": "178475"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"id": "VAR-202203-0664",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.20766129
},
"last_update_date": "2024-07-23T21:44:12.287000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "NV22-009",
"trust": 0.8,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/"
},
{
"title": "Ubuntu Security Notice: USN-5332-2: Bind vulnerability",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5332-2"
},
{
"title": "Red Hat: Moderate: dhcp security and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228385 - security advisory"
},
{
"title": "Red Hat: Moderate: bind security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227790 - security advisory"
},
{
"title": "Ubuntu Security Notice: USN-5332-1: Bind vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5332-1"
},
{
"title": "Red Hat: Moderate: bind security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228068 - security advisory"
},
{
"title": "Red Hat: Moderate: bind security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230402 - security advisory"
},
{
"title": "Debian Security Advisories: DSA-5105-1 bind9 -- security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=16d84b908a424f50b3236db9219500e3"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-25220"
},
{
"title": "Amazon Linux 2: ALAS2-2023-2001",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2023-2001"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-166",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-166"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-138",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-138"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/live-hack-cve/cve-2021-25220 "
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/vincent-deng/veracode-container-security-finding-parser "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-444",
"trust": 1.0
},
{
"problemtype": "Lack of information (CWE-noinfo) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.8,
"url": "https://kb.isc.org/v1/docs/cve-2021-25220"
},
{
"trust": 1.8,
"url": "https://security.gentoo.org/glsa/202210-25"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20220408-0001/"
},
{
"trust": 1.7,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
},
{
"trust": 1.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25220"
},
{
"trust": 1.6,
"url": "https://supportportal.juniper.net/s/article/2022-10-security-bulletin-junos-os-srx-series-cache-poisoning-vulnerability-in-bind-used-by-dns-proxy-cve-2021-25220?language=en_us"
},
{
"trust": 1.0,
"url": "https://access.redhat.com/security/cve/cve-2021-25220"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/2sxt7247qtknbq67mnrgzd23adxu6e5u/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5vx3i2u3icoiei5y7oya6cholfmnh3yq/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/de3uavcpumakg27zl5yxsp2c3riow3jz/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/"
},
{
"trust": 0.9,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
},
{
"trust": 0.8,
"url": "http://jvn.jp/vu/jvnvu98927070/index.html"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99475301/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5vx3i2u3icoiei5y7oya6cholfmnh3yq/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2sxt7247qtknbq67mnrgzd23adxu6e5u/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/de3uavcpumakg27zl5yxsp2c3riow3jz/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169846/red-hat-security-advisory-2022-8385-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1223"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1289"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/isc-bind-spoofing-via-dns-forwarders-cache-poisoning-37754"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169894/red-hat-security-advisory-2022-8068-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031728"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166356/ubuntu-security-notice-usn-5332-2.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1150"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1183"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1160"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169773/red-hat-security-advisory-2022-7643-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170724/red-hat-security-advisory-2023-0402-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169587/gentoo-linux-security-advisory-202210-25.html"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2021-25220/"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5750"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031701"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2694"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032124"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0396"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.4,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-5332-2"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-5332-1"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2795"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/444.html"
},
{
"trust": 0.1,
"url": "https://github.com/live-hack-cve/cve-2021-25220"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://alas.aws.amazon.com/al2/alas-2023-2001.html"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.1-0ubuntu2.10"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.15-1ubuntu1.2"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/bind9/1:9.11.3+dfsg-1ubuntu1.17"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/bind9"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7790"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0396"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7643"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-38178"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2906"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2881"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3080"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-38177"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0402"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8385"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2024:2720"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2128584"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2263896"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2263917"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2064512"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2164032"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2263914"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_2720.json"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "PACKETSTORM",
"id": "166356"
},
{
"db": "PACKETSTORM",
"id": "166354"
},
{
"db": "PACKETSTORM",
"id": "169261"
},
{
"db": "PACKETSTORM",
"id": "169745"
},
{
"db": "PACKETSTORM",
"id": "169773"
},
{
"db": "PACKETSTORM",
"id": "169587"
},
{
"db": "PACKETSTORM",
"id": "170724"
},
{
"db": "PACKETSTORM",
"id": "169846"
},
{
"db": "PACKETSTORM",
"id": "178475"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "PACKETSTORM",
"id": "166356"
},
{
"db": "PACKETSTORM",
"id": "166354"
},
{
"db": "PACKETSTORM",
"id": "169261"
},
{
"db": "PACKETSTORM",
"id": "169745"
},
{
"db": "PACKETSTORM",
"id": "169773"
},
{
"db": "PACKETSTORM",
"id": "169587"
},
{
"db": "PACKETSTORM",
"id": "170724"
},
{
"db": "PACKETSTORM",
"id": "169846"
},
{
"db": "PACKETSTORM",
"id": "178475"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-03-23T00:00:00",
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"date": "2022-05-12T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"date": "2022-03-17T15:54:34",
"db": "PACKETSTORM",
"id": "166356"
},
{
"date": "2022-03-17T15:54:20",
"db": "PACKETSTORM",
"id": "166354"
},
{
"date": "2022-03-28T19:12:00",
"db": "PACKETSTORM",
"id": "169261"
},
{
"date": "2022-11-08T13:44:36",
"db": "PACKETSTORM",
"id": "169745"
},
{
"date": "2022-11-08T13:49:24",
"db": "PACKETSTORM",
"id": "169773"
},
{
"date": "2022-10-31T14:50:53",
"db": "PACKETSTORM",
"id": "169587"
},
{
"date": "2023-01-25T16:07:50",
"db": "PACKETSTORM",
"id": "170724"
},
{
"date": "2022-11-15T16:40:52",
"db": "PACKETSTORM",
"id": "169846"
},
{
"date": "2024-05-09T15:16:06",
"db": "PACKETSTORM",
"id": "178475"
},
{
"date": "2022-03-09T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"date": "2022-03-23T13:15:07.680000",
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-11-28T00:00:00",
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"date": "2022-09-20T06:12:00",
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"date": "2023-07-24T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"date": "2023-11-09T14:44:33.733000",
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "166356"
},
{
"db": "PACKETSTORM",
"id": "166354"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
}
],
"trust": 0.8
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Cache pollution with incorrect records vulnerability in BIND",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "environmental issue",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
}
],
"trust": 0.6
}
}
VAR-202103-0287
Vulnerability from variot - Updated: 2024-07-23 21:38 A flaw involving a possible race condition and incorrect initialization of the process ID was found in the Linux kernel's child/parent process identification handling while filtering signal handlers. A local attacker is able to abuse this flaw to bypass checks and send any signal to a privileged process. The Linux kernel contains an initialization vulnerability. Information may be obtained, information may be tampered with, and a denial-of-service (DoS) condition may be caused. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: kernel-rt security and bug fix update Advisory ID: RHSA-2021:1739-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:1739 Issue date: 2021-05-18 CVE Names: CVE-2019-19523 CVE-2019-19528 CVE-2020-0431 CVE-2020-11608 CVE-2020-12114 CVE-2020-12362 CVE-2020-12464 CVE-2020-14314 CVE-2020-14356 CVE-2020-15437 CVE-2020-24394 CVE-2020-25212 CVE-2020-25284 CVE-2020-25285 CVE-2020-25643 CVE-2020-25704 CVE-2020-27786 CVE-2020-27835 CVE-2020-28974 CVE-2020-35508 CVE-2021-0342 ==================================================================== 1. Summary:
An update for kernel-rt is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Real Time (v. 8) - x86_64 Red Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es):
- kernel: Integer overflow in Intel(R) Graphics Drivers (CVE-2020-12362)
- kernel: use-after-free caused by a malicious USB device in the drivers/usb/misc/adutux.c driver (CVE-2019-19523)
- kernel: use-after-free bug caused by a malicious USB device in the drivers/usb/misc/iowarrior.c driver (CVE-2019-19528)
- kernel: possible out of bounds write in kbd_keycode of keyboard.c (CVE-2020-0431)
- kernel: DoS by corrupting mountpoint reference counter (CVE-2020-12114)
- kernel: use-after-free in usb_sg_cancel function in drivers/usb/core/message.c (CVE-2020-12464)
- kernel: buffer uses out of index in ext3/4 filesystem (CVE-2020-14314)
- kernel: Use After Free vulnerability in cgroup BPF component (CVE-2020-14356)
- kernel: NULL pointer dereference in serial8250_isa_init_ports function in drivers/tty/serial/8250/8250_core.c (CVE-2020-15437)
- kernel: umask not applied on filesystem without ACL support (CVE-2020-24394)
- kernel: TOCTOU mismatch in the NFS client code (CVE-2020-25212)
- kernel: incomplete permission checking for access to rbd devices (CVE-2020-25284)
- kernel: race condition between hugetlb sysctl handlers in mm/hugetlb.c (CVE-2020-25285)
- kernel: improper input validation in ppp_cp_parse_cr function leads to memory corruption and read overflow (CVE-2020-25643)
- kernel: perf_event_parse_addr_filter memory (CVE-2020-25704)
- kernel: use-after-free in kernel midi subsystem (CVE-2020-27786)
- kernel: child process is able to access parent mm through hfi dev file handle (CVE-2020-27835)
- kernel: slab-out-of-bounds read in fbcon (CVE-2020-28974)
- kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting ->real_parent (CVE-2020-35508)
- kernel: use after free in tun_get_user of tun.c could lead to local escalation of privilege (CVE-2021-0342)
- kernel: NULL pointer dereferences in ov511_mode_init_regs and ov518_mode_init_regs in drivers/media/usb/gspca/ov519.c (CVE-2020-11608)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
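The CVE-2020-35508 entry above concerns the check the kernel performs before signal delivery: whether the calling process is permitted to signal the target. From userspace that check can be observed with signal 0, which sends nothing but still exercises the permission logic. A minimal sketch (the `may_signal` helper is illustrative, not part of any advisory):

```python
import os

def may_signal(pid: int) -> bool:
    """Probe whether the caller may signal `pid`.

    Signal 0 delivers no signal; the kernel only runs the same
    permission check it applies before real signal delivery.
    """
    try:
        os.kill(pid, 0)
        return True
    except (PermissionError, ProcessLookupError):
        return False

# A process may always signal itself.
print(may_signal(os.getpid()))
```

The race described in the advisory allowed this check to be bypassed when a privileged parent was exiting during a `clone(CLONE_PARENT)`.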
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.4 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1783434 - CVE-2019-19523 kernel: use-after-free caused by a malicious USB device in the drivers/usb/misc/adutux.c driver
1783507 - CVE-2019-19528 kernel: use-after-free bug caused by a malicious USB device in the drivers/usb/misc/iowarrior.c driver
1831726 - CVE-2020-12464 kernel: use-after-free in usb_sg_cancel function in drivers/usb/core/message.c
1833445 - CVE-2020-11608 kernel: NULL pointer dereferences in ov511_mode_init_regs and ov518_mode_init_regs in drivers/media/usb/gspca/ov519.c
1848652 - CVE-2020-12114 kernel: DoS by corrupting mountpoint reference counter
1853922 - CVE-2020-14314 kernel: buffer uses out of index in ext3/4 filesystem
1868453 - CVE-2020-14356 kernel: Use After Free vulnerability in cgroup BPF component
1869141 - CVE-2020-24394 kernel: umask not applied on filesystem without ACL support
1877575 - CVE-2020-25212 kernel: TOCTOU mismatch in the NFS client code
1879981 - CVE-2020-25643 kernel: improper input validation in ppp_cp_parse_cr function leads to memory corruption and read overflow
1882591 - CVE-2020-25285 kernel: race condition between hugetlb sysctl handlers in mm/hugetlb.c
1882594 - CVE-2020-25284 kernel: incomplete permission checking for access to rbd devices
1886109 - BUG: using smp_processor_id() in preemptible [00000000] code: handler106/3082 [rhel-rt-8.4.0]
1894793 - After configure hugepage and reboot test server, kernel got panic status.
1895961 - CVE-2020-25704 kernel: perf_event_parse_addr_filter memory
1896842 - host locks up when running stress-ng itimers on RT kernel.
1897869 - Running oslat in RT guest, guest kernel shows Call Trace: INFO: task kcompactd0:35 blocked for more than 600 seconds.
1900933 - CVE-2020-27786 kernel: use-after-free in kernel midi subsystem
1901161 - CVE-2020-15437 kernel: NULL pointer dereference in serial8250_isa_init_ports function in drivers/tty/serial/8250/8250_core.c
1901709 - CVE-2020-27835 kernel: child process is able to access parent mm through hfi dev file handle
1902724 - CVE-2020-35508 kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting ->real_parent
1903126 - CVE-2020-28974 kernel: slab-out-of-bounds read in fbcon
1915799 - CVE-2021-0342 kernel: use after free in tun_get_user of tun.c could lead to local escalation of privilege
1919889 - CVE-2020-0431 kernel: possible out of bounds write in kbd_keycode of keyboard.c
1930246 - CVE-2020-12362 kernel: Integer overflow in Intel(R) Graphics Drivers
- Package List:
Red Hat Enterprise Linux Real Time for NFV (v. 8):
Source: kernel-rt-4.18.0-305.rt7.72.el8.src.rpm
x86_64:
kernel-rt-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-core-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-core-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-devel-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-kvm-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-modules-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debuginfo-common-x86_64-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-devel-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-kvm-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-modules-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm
Red Hat Enterprise Linux Real Time (v. 8):
Source: kernel-rt-4.18.0-305.rt7.72.el8.src.rpm
x86_64:
kernel-rt-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-core-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-core-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-devel-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-modules-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debug-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-debuginfo-common-x86_64-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-devel-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-modules-4.18.0-305.rt7.72.el8.x86_64.rpm
kernel-rt-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 8) - aarch64, noarch, ppc64le, s390x, x86_64
- Bugs fixed (https://bugzilla.redhat.com/):
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1328 - Port fix to 5.0.z for BZ-1945168
Bug Fix(es):
- kernel-rt: update RT source tree to the latest RHEL-8.2.z10 Batch source tree (BZ#1968022)
- Bugs fixed (https://bugzilla.redhat.com/):
1886285 - CVE-2020-26541 kernel: security bypass in certs/blacklist.c and certs/system_keyring.c
1895961 - CVE-2020-25704 kernel: perf_event_parse_addr_filter memory
1902724 - CVE-2020-35508 kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting ->real_parent
1961305 - CVE-2021-33034 kernel: use-after-free in net/bluetooth/hci_event.c when destroying an hci_chan
1968022 - kernel-rt: update RT source tree to the latest RHEL-8.2.z10 Batch source tree
1970273 - CVE-2021-33909 kernel: size_t-to-int conversion vulnerability in the filesystem layer
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.13. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2021:2122
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
This update fixes the following bug among others:
- Previously, resources for the ClusterOperator were being created early in the update process, which led to update failures when the ClusterOperator had no status condition while Operators were updating. This bug fix changes the timing of when these resources are created. As a result, updates can take place without errors. (BZ#1959238)
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-x86_64
The image digest is sha256:783a2c963f35ccab38e82e6a8c7fa954c3a4551e07d2f43c06098828dd986ed4
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-s390x
The image digest is sha256:4cf44e68413acad063203e1ee8982fd01d8b9c1f8643a5b31cd7ff341b3199cd
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-ppc64le
The image digest is sha256:d47ce972f87f14f1f3c5d50428d2255d1256dae3f45c938ace88547478643e36
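The `sha256:` digests listed above pin a release image to the exact bytes of its manifest: an image digest is the hex-encoded SHA-256 hash of the manifest the registry serves. A minimal sketch of how such a pinned reference is formed (the manifest bytes below are a stand-in, not a real OpenShift manifest):

```python
import hashlib

# Stand-in for the real image manifest bytes served by the registry.
manifest = b'{"schemaVersion": 2}'

# An OCI/Docker content digest is "sha256:" + hex SHA-256 of those bytes.
digest = "sha256:" + hashlib.sha256(manifest).hexdigest()

# Pulling by digest instead of by tag pins the exact release image:
pinned_ref = "quay.io/openshift-release-dev/ocp-release@" + digest
print(pinned_ref)
```

Because the digest is content-addressed, two pulls of the same digest are guaranteed to resolve to byte-identical manifests, which is why the advisory lists a digest per architecture.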
All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
- Solution:
For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923268 - [Assisted-4.7] [Staging] Using two both spelling "canceled" "cancelled"
1947216 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1953963 - Enable/Disable host operations returns cluster resource with incomplete hosts list
1957749 - ovn-kubernetes pod should have CPU and memory requests set but not limits
1959238 - CVO creating cloud-controller-manager too early causing upgrade failures
1960103 - SR-IOV obliviously reboot the node
1961941 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1962302 - packageserver clusteroperator does not set reason or message for Available condition
1962312 - Deployment considered unhealthy despite being available and at latest generation
1962435 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1963115 - Test verify /run filesystem contents failing
- ========================================================================== Ubuntu Security Notice USN-4752-1 February 25, 2021
linux-oem-5.6 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
Summary:
Several security issues were fixed in the Linux kernel.
Software Description: - linux-oem-5.6: Linux kernel for OEM systems
Details:
Daniele Antonioli, Nils Ole Tippenhauer, and Kasper Rasmussen discovered that legacy pairing and secure-connections pairing authentication in the Bluetooth protocol could allow an unauthenticated user to complete authentication without pairing credentials via adjacent access. A physically proximate attacker could use this to impersonate a previously paired Bluetooth device. (CVE-2020-10135)
Jay Shin discovered that the ext4 file system implementation in the Linux kernel did not properly handle directory access with broken indexing, leading to an out-of-bounds read vulnerability. A local attacker could use this to cause a denial of service (system crash). (CVE-2020-14314)
It was discovered that the block layer implementation in the Linux kernel did not properly perform reference counting in some situations, leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash). (CVE-2020-15436)
It was discovered that the serial port driver in the Linux kernel did not properly initialize a pointer in some situations. A local attacker could possibly use this to cause a denial of service (system crash). (CVE-2020-15437)
Andy Nguyen discovered that the Bluetooth HCI event packet parser in the Linux kernel did not properly handle event advertisements of certain sizes, leading to a heap-based buffer overflow. A physically proximate remote attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-24490)
It was discovered that the NFS client implementation in the Linux kernel did not properly perform bounds checking before copying security labels in some situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-25212)
It was discovered that the Rados block device (rbd) driver in the Linux kernel did not properly perform privilege checks for access to rbd devices in some situations. A local attacker could use this to map or unmap rbd block devices. (CVE-2020-25284)
It was discovered that the block layer subsystem in the Linux kernel did not properly handle zero-length requests. A local attacker could use this to cause a denial of service. (CVE-2020-25641)
It was discovered that the HDLC PPP implementation in the Linux kernel did not properly validate input in some situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-25643)
Kiyin (尹亮) discovered that the perf subsystem in the Linux kernel did not properly deallocate memory in some situations. A privileged attacker could use this to cause a denial of service (kernel memory exhaustion). (CVE-2020-25704)
It was discovered that the KVM hypervisor in the Linux kernel did not properly handle interrupts in certain situations. A local attacker in a guest VM could possibly use this to cause a denial of service (host system crash). (CVE-2020-27152)
It was discovered that the jfs file system implementation in the Linux kernel contained an out-of-bounds read vulnerability. A local attacker could use this to possibly cause a denial of service (system crash). (CVE-2020-27815)
It was discovered that an information leak existed in the syscall implementation in the Linux kernel on 32 bit systems. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2020-28588)
It was discovered that the framebuffer implementation in the Linux kernel did not properly perform range checks in certain situations. A local attacker could use this to expose sensitive information (kernel memory). A local attacker could use this to gain unintended write access to read-only memory pages. A local attacker could use this to cause a denial of service (system crash) or possibly expose sensitive information. (CVE-2020-29369)
Jann Horn discovered that the romfs file system in the Linux kernel did not properly validate file system meta-data, leading to an out-of-bounds read. An attacker could use this to construct a malicious romfs image that, when mounted, exposed sensitive information (kernel memory). (CVE-2020-29371)
Jann Horn discovered that the tty subsystem of the Linux kernel did not use consistent locking in some situations, leading to a read-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly expose sensitive information (kernel memory). (CVE-2020-29660)
Jann Horn discovered a race condition in the tty subsystem of the Linux kernel in the locking for the TIOCSPGRP ioctl(), leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-35508)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS:
linux-image-5.6.0-1048-oem 5.6.0-1048.52
linux-image-oem-20.04 5.6.0.1048.44
After a standard system update you need to reboot your computer to make all the necessary changes.
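Whether an installed kernel already meets the fixed version above comes down to a numeric comparison of the version components. A small sketch, assuming Ubuntu's `upstream-ABI.upload` version format (the `kver` helper is illustrative, not an Ubuntu tool):

```python
def kver(s: str) -> tuple:
    """Split a version like '5.6.0-1048.52' into a numerically
    comparable tuple: (5, 6, 0, 1048, 52)."""
    main, _, abi = s.partition("-")
    parts = [int(x) for x in main.split(".")]
    parts += [int(x) for x in abi.split(".") if x.isdigit()]
    return tuple(parts)

# Fixed version from this notice (linux-image-5.6.0-1048-oem).
FIXED = kver("5.6.0-1048.52")
print(kver("5.6.0-1047.51") < FIXED)  # an older ABI still needs the update
```

Comparing tuples rather than raw strings avoids the usual lexicographic trap where "1047" would sort after "52".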
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
References: https://usn.ubuntu.com/4752-1 CVE-2020-10135, CVE-2020-14314, CVE-2020-15436, CVE-2020-15437, CVE-2020-24490, CVE-2020-25212, CVE-2020-25284, CVE-2020-25641, CVE-2020-25643, CVE-2020-25704, CVE-2020-27152, CVE-2020-27815, CVE-2020-28588, CVE-2020-28915, CVE-2020-29368, CVE-2020-29369, CVE-2020-29371, CVE-2020-29660, CVE-2020-29661, CVE-2020-35508
Package Information: https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52
Show details on source website{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202103-0287",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.12"
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h615c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"model": "fas8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h610c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "eq",
"trust": 1.0,
"vendor": "linux",
"version": "5.12"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "a700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fas8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "aff a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"model": "red hat enterprise linux",
"scope": null,
"trust": 0.8,
"vendor": "Red Hat",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:5.12:rc1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:5.12:rc2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:5.12:rc3:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:5.12:rc4:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:5.12:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.12",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:a700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:a700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fas8300_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fas8300:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fas8700_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fas8700:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:aff_a400_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:aff_a400:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h610c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h610c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h610s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h610s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h615c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h615c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "162837"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
}
],
"trust": 1.1
},
"cve": "CVE-2020-35508",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "MEDIUM",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 4.4,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.4,
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:L/AC:M/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Medium",
"accessVector": "Local",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Partial",
"baseScore": 4.4,
"confidentialityImpact": "Partial",
"exploitabilityScore": null,
"id": "CVE-2020-35508",
"impactScore": null,
"integrityImpact": "Partial",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "Medium",
"trust": 0.9,
"userInteractionRequired": null,
"vectorString": "AV:L/AC:M/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 4.4,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.4,
"id": "VHN-377704",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:L/AC:M/AU:N/C:P/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "LOW",
"baseScore": 4.5,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 1.0,
"impactScore": 3.4,
"integrityImpact": "LOW",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:L",
"version": "3.1"
},
{
"attackComplexity": "High",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "Low",
"baseScore": 4.5,
"baseSeverity": "Medium",
"confidentialityImpact": "Low",
"exploitabilityScore": null,
"id": "CVE-2020-35508",
"impactScore": null,
"integrityImpact": "Low",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:L",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2020-35508",
"trust": 1.8,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-202102-1668",
"trust": 0.6,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-377704",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2020-35508",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "A flaw possibility of race condition and incorrect initialization of the process id was found in the Linux kernel child/parent process identification handling while filtering signal handlers. A local attacker is able to abuse this flaw to bypass checks to send any signal to a privileged process. Linux Kernel Contains an initialization vulnerability.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: kernel-rt security and bug fix update\nAdvisory ID: RHSA-2021:1739-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:1739\nIssue date: 2021-05-18\nCVE Names: CVE-2019-19523 CVE-2019-19528 CVE-2020-0431\n CVE-2020-11608 CVE-2020-12114 CVE-2020-12362\n CVE-2020-12464 CVE-2020-14314 CVE-2020-14356\n CVE-2020-15437 CVE-2020-24394 CVE-2020-25212\n CVE-2020-25284 CVE-2020-25285 CVE-2020-25643\n CVE-2020-25704 CVE-2020-27786 CVE-2020-27835\n CVE-2020-28974 CVE-2020-35508 CVE-2021-0342\n====================================================================\n1. Summary:\n\nAn update for kernel-rt is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Real Time (v. 8) - x86_64\nRed Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. 
\n\nSecurity Fix(es):\n\n* kernel: Integer overflow in Intel(R) Graphics Drivers (CVE-2020-12362)\n\n* kernel: use-after-free caused by a malicious USB device in the\ndrivers/usb/misc/adutux.c driver (CVE-2019-19523)\n\n* kernel: use-after-free bug caused by a malicious USB device in the\ndrivers/usb/misc/iowarrior.c driver (CVE-2019-19528)\n\n* kernel: possible out of bounds write in kbd_keycode of keyboard.c\n(CVE-2020-0431)\n\n* kernel: DoS by corrupting mountpoint reference counter (CVE-2020-12114)\n\n* kernel: use-after-free in usb_sg_cancel function in\ndrivers/usb/core/message.c (CVE-2020-12464)\n\n* kernel: buffer uses out of index in ext3/4 filesystem (CVE-2020-14314)\n\n* kernel: Use After Free vulnerability in cgroup BPF component\n(CVE-2020-14356)\n\n* kernel: NULL pointer dereference in serial8250_isa_init_ports function in\ndrivers/tty/serial/8250/8250_core.c (CVE-2020-15437)\n\n* kernel: umask not applied on filesystem without ACL support\n(CVE-2020-24394)\n\n* kernel: TOCTOU mismatch in the NFS client code (CVE-2020-25212)\n\n* kernel: incomplete permission checking for access to rbd devices\n(CVE-2020-25284)\n\n* kernel: race condition between hugetlb sysctl handlers in mm/hugetlb.c\n(CVE-2020-25285)\n\n* kernel: improper input validation in ppp_cp_parse_cr function leads to\nmemory corruption and read overflow (CVE-2020-25643)\n\n* kernel: perf_event_parse_addr_filter memory (CVE-2020-25704)\n\n* kernel: use-after-free in kernel midi subsystem (CVE-2020-27786)\n\n* kernel: child process is able to access parent mm through hfi dev file\nhandle (CVE-2020-27835)\n\n* kernel: slab-out-of-bounds read in fbcon (CVE-2020-28974)\n\n* kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting\n- -\u003ereal_parent (CVE-2020-35508)\n\n* kernel: use after free in tun_get_user of tun.c could lead to local\nescalation of privilege (CVE-2021-0342)\n\n* kernel: NULL pointer dereferences in ov511_mode_init_regs and\nov518_mode_init_regs in 
drivers/media/usb/gspca/ov519.c (CVE-2020-11608)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.4 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1783434 - CVE-2019-19523 kernel: use-after-free caused by a malicious USB device in the drivers/usb/misc/adutux.c driver\n1783507 - CVE-2019-19528 kernel: use-after-free bug caused by a malicious USB device in the drivers/usb/misc/iowarrior.c driver\n1831726 - CVE-2020-12464 kernel: use-after-free in usb_sg_cancel function in drivers/usb/core/message.c\n1833445 - CVE-2020-11608 kernel: NULL pointer dereferences in ov511_mode_init_regs and ov518_mode_init_regs in drivers/media/usb/gspca/ov519.c\n1848652 - CVE-2020-12114 kernel: DoS by corrupting mountpoint reference counter\n1853922 - CVE-2020-14314 kernel: buffer uses out of index in ext3/4 filesystem\n1868453 - CVE-2020-14356 kernel: Use After Free vulnerability in cgroup BPF component\n1869141 - CVE-2020-24394 kernel: umask not applied on filesystem without ACL support\n1877575 - CVE-2020-25212 kernel: TOCTOU mismatch in the NFS client code\n1879981 - CVE-2020-25643 kernel: improper input validation in ppp_cp_parse_cr function leads to memory corruption and read overflow\n1882591 - CVE-2020-25285 kernel: race condition between hugetlb sysctl handlers in mm/hugetlb.c\n1882594 - CVE-2020-25284 kernel: incomplete permission checking for access to rbd devices\n1886109 - BUG: using smp_processor_id() in preemptible [00000000] code: 
handler106/3082 [rhel-rt-8.4.0]\n1894793 - After configure hugepage and reboot test server, kernel got panic status. \n1895961 - CVE-2020-25704 kernel: perf_event_parse_addr_filter memory\n1896842 - host locks up when running stress-ng itimers on RT kernel. \n1897869 - Running oslat in RT guest, guest kernel shows Call Trace: INFO: task kcompactd0:35 blocked for more than 600 seconds. \n1900933 - CVE-2020-27786 kernel: use-after-free in kernel midi subsystem\n1901161 - CVE-2020-15437 kernel: NULL pointer dereference in serial8250_isa_init_ports function in drivers/tty/serial/8250/8250_core.c\n1901709 - CVE-2020-27835 kernel: child process is able to access parent mm through hfi dev file handle\n1902724 - CVE-2020-35508 kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting -\u003ereal_parent\n1903126 - CVE-2020-28974 kernel: slab-out-of-bounds read in fbcon\n1915799 - CVE-2021-0342 kernel: use after free in tun_get_user of tun.c could lead to local escalation of privilege\n1919889 - CVE-2020-0431 kernel: possible out of bounds write in kbd_keycode of keyboard.c\n1930246 - CVE-2020-12362 kernel: Integer overflow in Intel(R) Graphics Drivers\n\n6. Package List:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 
8):\n\nSource:\nkernel-rt-4.18.0-305.rt7.72.el8.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-core-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-core-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-kvm-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-devel-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-kvm-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-modules-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm\n\nRed Hat Enterprise Linux Real Time (v. 8):\n\nSource:\nkernel-rt-4.18.0-305.rt7.72.el8.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-core-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-core-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-devel-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-modules-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYKPwgNzjgjWX9erEAQiOVg//YfXIKUxc84y2aRexvrPHeTQvYkFMktq7\nNEhNhHqEZbDUabM5+eKb5hoyG44PmXvQuK1njYjEbpTjQss92U8fekGJZAR9Zbsl\nWEfVcu/ix/UJOzQj/lp+dKhirBSE/33xgBmSsQI6JQc+xn1AoZC8bOeSqyr7J6Y7\nt6I552Llhun9DDUGS8KYAM8PkrK3RGQybAS3S4atTdYd0qk42ZPF7/XqrbI7G4iq\n0Oe+ZePj6lN1O7pHV0WYUD2yzLTCZZopmz5847BLBEbGLqPyxlShZ+MFGsWxCOHk\ntW8lw/nqVt/MNlOXI1tD6P6iFZ6JQYrRU5mGFlvsl3t9NQW60MxmcUNPgtVknXW5\nBssBM/r6uLi0yFTTnDRZnv2MCs7fIzzqKXOHozrCvItswG6S8Qs72MaW2EQHAEen\nm7/fMKWTjt9CQudNCm/FwHLb8O9cYnOZwRiAINomo2B/Fi1b7WlquETSmjgQaQNr\nRxqtgiNQ98q92gnFgC8pCzxmiKRmHLFJEuxXYVq0O8Ch5i/eC8ExoO7Hqe6kYnJe\nZaST6fAtb2bMDcPdborfSIUmuDcYdKFtcEfCuuFZIbBxnL2aJDMw0zen/rmDNQyV\nlwwXoKanoP5EjKKFMc/zkeHlOInMzeHa/0DIlA9h3kpro5eGN0uOPZvsrlryjC+J\niJzkORGWplM\\xfb/D\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1328 - Port fix to 5.0.z for BZ-1945168\n\n6. \n\nBug Fix(es):\n\n* kernel-rt: update RT source tree to the latest RHEL-8.2.z10 Batch source\ntree (BZ#1968022)\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1886285 - CVE-2020-26541 kernel: security bypass in certs/blacklist.c and certs/system_keyring.c\n1895961 - CVE-2020-25704 kernel: perf_event_parse_addr_filter memory\n1902724 - CVE-2020-35508 kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting -\u003ereal_parent\n1961305 - CVE-2021-33034 kernel: use-after-free in net/bluetooth/hci_event.c when destroying an hci_chan\n1968022 - kernel-rt: update RT source tree to the latest RHEL-8.2.z10 Batch source tree\n1970273 - CVE-2021-33909 kernel: size_t-to-int conversion vulnerability in the filesystem layer\n\n6. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.13. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2122\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nThis update fixes the following bug among others:\n\n* Previously, resources for the ClusterOperator were being created early in\nthe update process, which led to update failures when the ClusterOperator\nhad no status condition while Operators were updating. This bug fix changes\nthe timing of when these resources are created. As a result, updates can\ntake place without errors. 
(BZ#1959238)\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-x86_64\n\nThe image digest is\nsha256:783a2c963f35ccab38e82e6a8c7fa954c3a4551e07d2f43c06098828dd986ed4\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-s390x\n\nThe image digest is\nsha256:4cf44e68413acad063203e1ee8982fd01d8b9c1f8643a5b31cd7ff341b3199cd\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-ppc64le\n\nThe image digest is\nsha256:d47ce972f87f14f1f3c5d50428d2255d1256dae3f45c938ace88547478643e36\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923268 - [Assisted-4.7] [Staging] Using two both spelling \"canceled\" \"cancelled\"\n1947216 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go\n1953963 - Enable/Disable host operations returns cluster resource with incomplete hosts list\n1957749 - ovn-kubernetes pod should have CPU and memory requests set but not limits\n1959238 - CVO creating cloud-controller-manager too early causing upgrade failures\n1960103 - SR-IOV obliviously reboot the node\n1961941 - Local Storage Operator using LocalVolume CR fails to create PV\u0027s when backend storage failure is simulated\n1962302 - packageserver clusteroperator does not set reason or message for Available condition\n1962312 - Deployment considered unhealthy despite being available and at latest generation\n1962435 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone\n1963115 - Test verify /run filesystem contents failing\n\n5. ==========================================================================\nUbuntu Security Notice USN-4752-1\nFebruary 25, 2021\n\nlinux-oem-5.6 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-oem-5.6: Linux kernel for OEM systems\n\nDetails:\n\nDaniele Antonioli, Nils Ole Tippenhauer, and Kasper Rasmussen discovered\nthat legacy pairing and secure-connections pairing authentication in the\nBluetooth protocol could allow an unauthenticated user to complete\nauthentication without pairing credentials via adjacent access. A\nphysically proximate attacker could use this to impersonate a previously\npaired Bluetooth device. 
(CVE-2020-10135)\n\nJay Shin discovered that the ext4 file system implementation in the Linux\nkernel did not properly handle directory access with broken indexing,\nleading to an out-of-bounds read vulnerability. A local attacker could use\nthis to cause a denial of service (system crash). (CVE-2020-14314)\n\nIt was discovered that the block layer implementation in the Linux kernel\ndid not properly perform reference counting in some situations, leading to\na use-after-free vulnerability. A local attacker could use this to cause a\ndenial of service (system crash). (CVE-2020-15436)\n\nIt was discovered that the serial port driver in the Linux kernel did not\nproperly initialize a pointer in some situations. A local attacker could\npossibly use this to cause a denial of service (system crash). \n(CVE-2020-15437)\n\nAndy Nguyen discovered that the Bluetooth HCI event packet parser in the\nLinux kernel did not properly handle event advertisements of certain sizes,\nleading to a heap-based buffer overflow. A physically proximate remote\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. (CVE-2020-24490)\n\nIt was discovered that the NFS client implementation in the Linux kernel\ndid not properly perform bounds checking before copying security labels in\nsome situations. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2020-25212)\n\nIt was discovered that the Rados block device (rbd) driver in the Linux\nkernel did not properly perform privilege checks for access to rbd devices\nin some situations. A local attacker could use this to map or unmap rbd\nblock devices. (CVE-2020-25284)\n\nIt was discovered that the block layer subsystem in the Linux kernel did\nnot properly handle zero-length requests. A local attacker could use this\nto cause a denial of service. 
(CVE-2020-25641)\n\nIt was discovered that the HDLC PPP implementation in the Linux kernel did\nnot properly validate input in some situations. A local attacker could use\nthis to cause a denial of service (system crash) or possibly execute\narbitrary code. (CVE-2020-25643)\n\nKiyin (\u5c39\u4eae) discovered that the perf subsystem in the Linux kernel did\nnot properly deallocate memory in some situations. A privileged attacker\ncould use this to cause a denial of service (kernel memory exhaustion). \n(CVE-2020-25704)\n\nIt was discovered that the KVM hypervisor in the Linux kernel did not\nproperly handle interrupts in certain situations. A local attacker in a\nguest VM could possibly use this to cause a denial of service (host system\ncrash). (CVE-2020-27152)\n\nIt was discovered that the jfs file system implementation in the Linux\nkernel contained an out-of-bounds read vulnerability. A local attacker\ncould use this to possibly cause a denial of service (system crash). \n(CVE-2020-27815)\n\nIt was discovered that an information leak existed in the syscall\nimplementation in the Linux kernel on 32 bit systems. A local attacker\ncould use this to expose sensitive information (kernel memory). \n(CVE-2020-28588)\n\nIt was discovered that the framebuffer implementation in the Linux kernel\ndid not properly perform range checks in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). A local attacker could use\nthis to gain unintended write access to read-only memory pages. A local attacker could use this to cause a\ndenial of service (system crash) or possibly expose sensitive information. \n(CVE-2020-29369)\n\nJann Horn discovered that the romfs file system in the Linux kernel did not\nproperly validate file system meta-data, leading to an out-of-bounds read. \nAn attacker could use this to construct a malicious romfs image that, when\nmounted, exposed sensitive information (kernel memory). 
(CVE-2020-29371)\n\nJann Horn discovered that the tty subsystem of the Linux kernel did not use\nconsistent locking in some situations, leading to a read-after-free\nvulnerability. A local attacker could use this to cause a denial of service\n(system crash) or possibly expose sensitive information (kernel memory). \n(CVE-2020-29660)\n\nJann Horn discovered a race condition in the tty subsystem of the Linux\nkernel in the locking for the TIOCSPGRP ioctl(), leading to a use-after-\nfree vulnerability. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. \n(CVE-2020-35508)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.6.0-1048-oem 5.6.0-1048.52\n linux-image-oem-20.04 5.6.0.1048.44\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. \n\nReferences:\n https://usn.ubuntu.com/4752-1\n CVE-2020-10135, CVE-2020-14314, CVE-2020-15436, CVE-2020-15437,\n CVE-2020-24490, CVE-2020-25212, CVE-2020-25284, CVE-2020-25641,\n CVE-2020-25643, CVE-2020-25704, CVE-2020-27152, CVE-2020-27815,\n CVE-2020-28588, CVE-2020-28915, CVE-2020-29368, CVE-2020-29369,\n CVE-2020-29371, CVE-2020-29660, CVE-2020-29661, CVE-2020-35508\n\nPackage Information:\n https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-35508"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "162837"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
}
],
"trust": 2.43
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2020-35508",
"trust": 3.3
},
{
"db": "PACKETSTORM",
"id": "162626",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "161556",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "163584",
"trust": 0.7
},
{
"db": "CS-HELP",
"id": "SB2021072252",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021122404",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0717",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1820",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1866",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2439",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1688",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "161555",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "162654",
"trust": 0.2
},
{
"db": "VULHUB",
"id": "VHN-377704",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2020-35508",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "162837",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "162877",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "162837"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"id": "VAR-202103-0287",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T21:38:27.231000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Linux\u00a0Kernel\u00a0Archives Red hat Red\u00a0Hat\u00a0Bugzilla",
"trust": 0.8,
"url": "https://github.com/torvalds/linux/commit/b4e00444cab4c3f3fec876dc0cccc8cbb0d1a948"
},
{
"title": "IBM: Security Bulletin: Vulnerabilities in the Linux Kernel, Samba, Sudo, Python, and tcmu-runner affect IBM Spectrum Protect Plus",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ddbe78143bb073890c2ecb87b35850bf"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-665",
"trust": 1.1
},
{
"problemtype": "Improper initialization (CWE-665) [ Other ]",
"trust": 0.8
},
{
"problemtype": "CWE-362",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.9,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35508"
},
{
"trust": 1.8,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=1902724"
},
{
"trust": 1.8,
"url": "https://github.com/torvalds/linux/commit/b4e00444cab4c3f3fec876dc0cccc8cbb0d1a948"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20210513-0006/"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35508"
},
{
"trust": 0.7,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-the-linux-kernel-samba-sudo-python-and-tcmu-runner-affect-ibm-spectrum-protect-plus/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2021:1739"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2021:1578"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2021:2719"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/errata/rhsa-2021:2718"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021072252"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0717"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-privilege-escalation-via-signal-sending-34683"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1866"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1688"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1732"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1820"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2439"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162626/red-hat-security-advisory-2021-1578-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163584/red-hat-security-advisory-2021-2719-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161556/ubuntu-security-notice-usn-4752-1.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021122404"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25704"
},
{
"trust": 0.5,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-25704"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12114"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-19528"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12464"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-14314"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-19523"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12362"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0431"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-25285"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12114"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-25212"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19523"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-28974"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-14356"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-27835"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-15437"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-25284"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-27786"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14314"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-25643"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11608"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-11608"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24394"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-0431"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-0342"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12464"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19528"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25212"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25643"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25284"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14356"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28974"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27835"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15437"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36322"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-18811"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18811"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24394"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0342"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25285"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.4_release_notes/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27786"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14347"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-8286"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-28196"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-15358"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-25712"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13543"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9951"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13434"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-3842"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13776"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-24977"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-8231"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-10878"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-29362"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9948"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-8285"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-9169"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-26116"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14363"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13584"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-26137"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14360"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-29361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-27619"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9983"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3177"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3326"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-25013"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-2708"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14345"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14344"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23336"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14362"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-8927"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-29363"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-10543"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14346"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2016-10228"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-8284"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-27618"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29660"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29661"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27815"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28588"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/665.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36322"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14346"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20305"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14345"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13543"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13584"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14347"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14360"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2136"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14344"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-u"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33909"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33034"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33909"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26541"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-006"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33034"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25039"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15586"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25037"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36242"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25037"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28935"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25034"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25035"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25038"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21645"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27783"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25042"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25042"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25038"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25659"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25036"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21643"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25215"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24331"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25036"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30465"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25035"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21644"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2121"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24332"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25039"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25040"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2122"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21642"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25034"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4752-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15436"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24490"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25641"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29369"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29371"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25656"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.8/5.8.0-44.50~20.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27777"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29568"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25668"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25669"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.8.0-1019.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.8.0-1023.24"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.8.0-1024.26"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.8.0-1016.19"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.8.0-1021.22"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27830"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.8.0-44.50"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29569"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4751-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.8.0-1023.25"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "162837"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "162837"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-03-26T00:00:00",
"db": "VULHUB",
"id": "VHN-377704"
},
{
"date": "2021-03-26T00:00:00",
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"date": "2021-12-02T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"date": "2021-05-19T14:06:16",
"db": "PACKETSTORM",
"id": "162654"
},
{
"date": "2021-05-19T13:56:20",
"db": "PACKETSTORM",
"id": "162626"
},
{
"date": "2021-05-27T13:28:54",
"db": "PACKETSTORM",
"id": "162837"
},
{
"date": "2021-07-21T16:02:50",
"db": "PACKETSTORM",
"id": "163584"
},
{
"date": "2021-06-01T14:45:29",
"db": "PACKETSTORM",
"id": "162877"
},
{
"date": "2021-02-25T15:31:12",
"db": "PACKETSTORM",
"id": "161556"
},
{
"date": "2021-02-25T15:31:02",
"db": "PACKETSTORM",
"id": "161555"
},
{
"date": "2021-02-25T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"date": "2021-03-26T17:15:12.203000",
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-02-12T00:00:00",
"db": "VULHUB",
"id": "VHN-377704"
},
{
"date": "2021-04-12T00:00:00",
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"date": "2021-12-02T09:13:00",
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"date": "2023-02-03T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"date": "2023-02-12T23:41:00.150000",
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
}
],
"trust": 0.8
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Linux\u00a0Kernel\u00a0 Initialization vulnerabilities",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
}
],
"trust": 0.6
}
}
VAR-202210-1070
Vulnerability from variot - Updated: 2024-07-23 21:36
An issue was discovered in libxml2 before 2.10.3. Certain invalid XML entity definitions can corrupt a hash table key, potentially leading to subsequent logic errors. In one case, a double-free can be provoked. libxml2 is written in C and can be called from many languages, such as C and C++. Currently there is no further information about this vulnerability; please keep an eye on CNNVD or vendor announcements. JIRA issues fixed (https://issues.jboss.org/):
OSSM-1330 - Allow specifying secret as pilot server cert when using CertificateAuthority: Custom
OSSM-2342 - Run OSSM operator on infrastructure nodes
OSSM-2371 - Kiali in read-only mode still can change the log level of the envoy proxies
OSSM-2373 - Can't login to Kiali with "Error trying to get OAuth metadata"
OSSM-2374 - Deleting a SMM also deletes the SMMR in OpenShift Service Mesh
OSSM-2492 - Default tolerations in SMCP not passed to Jaeger
OSSM-2493 - Default nodeSelector and tolerations in SMCP not passed to Kiali
OSSM-3317 - Error: deployment.accessible_namespaces set to ['**']
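The libxml2 flaw above is triggered by attacker-controlled entity definitions. As a hedged, minimal illustration (not the libxml2 fix itself), Python's standard-library XML parser shows one defensive posture: an undefined entity reference in untrusted input is rejected up front rather than expanded.

```python
import xml.etree.ElementTree as ET

# Well-formed input without entity references parses normally.
root = ET.fromstring("<config><mode>safe</mode></config>")
print(root.find("mode").text)  # -> safe

# An undefined entity reference raises ParseError instead of being
# expanded, so malformed entity definitions never reach application code.
try:
    ET.fromstring("<config>&undefined;</config>")
except ET.ParseError as exc:
    print("rejected:", exc)
```

The sketch only shows input rejection at the application layer; the actual CVE-2022-40304 fix is updating the libxml2 C library itself.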
- ==========================================================================
Ubuntu Security Notice USN-5760-2
December 05, 2022
libxml2 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in libxml2.
Software Description:
- libxml2: GNOME XML library
Details:
USN-5760-1 fixed vulnerabilities in libxml2. This update provides the corresponding updates for Ubuntu 14.04 ESM and Ubuntu 16.04 ESM.
Original advisory details:
It was discovered that libxml2 incorrectly handled certain XML files. An attacker could possibly use this issue to expose sensitive information, cause a crash, or execute arbitrary code. (CVE-2022-40304)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM:
  libxml2 2.9.3+dfsg1-1ubuntu0.7+esm4
  libxml2-utils 2.9.3+dfsg1-1ubuntu0.7+esm4
Ubuntu 14.04 ESM:
  libxml2 2.9.1+dfsg1-3ubuntu4.13+esm4
  libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm4
In general, a standard system update will make all the necessary changes.
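The ESM package versions above only matter on hosts whose installed version is older than the fix. As a rough sketch (not dpkg's full Debian version algorithm — epochs and "~" ordering are ignored), a comparison like the following can flag hosts that still need the update; the `installed` value here is hypothetical.

```python
import re

def version_key(ver: str):
    # Split into digit and non-digit runs so numeric parts compare
    # numerically ("10" > "9"). Simplified Debian-style ordering:
    # no epoch or "~" (pre-release) handling.
    return [int(t) if t.isdigit() else t for t in re.findall(r"\d+|\D+", ver)]

installed = "2.9.3+dfsg1-1ubuntu0.7+esm3"   # hypothetical installed version
fixed = "2.9.3+dfsg1-1ubuntu0.7+esm4"       # fixed version from this notice

if version_key(installed) < version_key(fixed):
    print("update required")
else:
    print("already patched")
```

On a real Ubuntu host, `dpkg --compare-versions` implements the authoritative ordering and should be preferred.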
====================================================================
Red Hat Security Advisory
Synopsis: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update
Advisory ID: RHSA-2022:8841-01
Product: Red Hat JBoss Core Services
Advisory URL: https://access.redhat.com/errata/RHSA-2022:8841
Issue date: 2022-12-08
CVE Names: CVE-2022-1292 CVE-2022-2068 CVE-2022-22721 CVE-2022-23943 CVE-2022-26377 CVE-2022-28330 CVE-2022-28614 CVE-2022-28615 CVE-2022-30522 CVE-2022-31813 CVE-2022-32206 CVE-2022-32207 CVE-2022-32208 CVE-2022-32221 CVE-2022-35252 CVE-2022-37434 CVE-2022-40303 CVE-2022-40304 CVE-2022-40674 CVE-2022-42915 CVE-2022-42916
====================================================================
1. Summary:
An update is now available for Red Hat JBoss Core Services.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat JBoss Core Services is a set of supplementary software for Red Hat JBoss middleware products. This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
This release of Red Hat JBoss Core Services Apache HTTP Server 2.4.51 Service Pack 1 serves as a replacement for Red Hat JBoss Core Services Apache HTTP Server 2.4.51, and includes bug fixes and enhancements, which are documented in the Release Notes document linked to in the References.
Security Fix(es):
- libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)
- libxml2: dict corruption caused by entity reference cycles (CVE-2022-40304)
- expat: a use-after-free in the doContent function in xmlparse.c (CVE-2022-40674)
- zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field (CVE-2022-37434)
- curl: HSTS bypass via IDN (CVE-2022-42916)
- curl: HTTP proxy double-free (CVE-2022-42915)
- curl: POST following PUT confusion (CVE-2022-32221)
- httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism (CVE-2022-31813)
- httpd: mod_sed: DoS vulnerability (CVE-2022-30522)
- httpd: out-of-bounds read in ap_strcmp_match() (CVE-2022-28615)
- httpd: out-of-bounds read via ap_rwrite() (CVE-2022-28614)
- httpd: mod_proxy_ajp: Possible request smuggling (CVE-2022-26377)
- curl: control code in cookie denial of service (CVE-2022-35252)
- jbcs-httpd24-httpd: httpd: mod_isapi: out-of-bounds read (CVE-2022-28330)
- curl: Unpreserved file permissions (CVE-2022-32207)
- curl: HTTP compression denial of service (CVE-2022-32206) and FTP-KRB bad message verification (CVE-2022-32208)
- openssl: the c_rehash script allows command injection (CVE-2022-2068)
- openssl: c_rehash script allows command injection (CVE-2022-1292)
- jbcs-httpd24-httpd: httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody (CVE-2022-22721)
- jbcs-httpd24-httpd: httpd: mod_sed: Read/write beyond bounds (CVE-2022-23943)
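Several of the fixes above concern parsers fed untrusted, size-inflating input (the zlib inflate over-read, the libxml2 XML_PARSE_HUGE overflows). Independent of updating the affected C libraries, callers can bound how much decompressed output they accept per step; a minimal Python sketch using the stdlib zlib module:

```python
import zlib

# A small compressed payload that inflates to 100,000 bytes.
payload = zlib.compress(b"x" * 100_000)

d = zlib.decompressobj()
chunk = d.decompress(payload, 65536)      # cap output at 64 KiB per call
# A real consumer would stop here (or stream to disk) once the cap is
# hit, instead of letting a hostile stream exhaust memory.
rest = d.decompress(d.unconsumed_tail) + d.flush()

print(len(chunk), len(chunk) + len(rest))
```

This is a defensive-consumption sketch only; the over-read in zlib's own inflate() (CVE-2022-37434) is fixed by the package update itself.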
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds
2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody
2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection
2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling
2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read
2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()
2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()
2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability
2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism
2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2099305 - CVE-2022-32207 curl: Unpreserved file permissions
2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification
2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field
2120718 - CVE-2022-35252 curl: control code in cookie denial of service
2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c
2135411 - CVE-2022-32221 curl: POST following PUT confusion
2135413 - CVE-2022-42915 curl: HTTP proxy double-free
2135416 - CVE-2022-42916 curl: HSTS bypass via IDN
2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE
2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles
- References:
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-22721
https://access.redhat.com/security/cve/CVE-2022-23943
https://access.redhat.com/security/cve/CVE-2022-26377
https://access.redhat.com/security/cve/CVE-2022-28330
https://access.redhat.com/security/cve/CVE-2022-28614
https://access.redhat.com/security/cve/CVE-2022-28615
https://access.redhat.com/security/cve/CVE-2022-30522
https://access.redhat.com/security/cve/CVE-2022-31813
https://access.redhat.com/security/cve/CVE-2022-32206
https://access.redhat.com/security/cve/CVE-2022-32207
https://access.redhat.com/security/cve/CVE-2022-32208
https://access.redhat.com/security/cve/CVE-2022-32221
https://access.redhat.com/security/cve/CVE-2022-35252
https://access.redhat.com/security/cve/CVE-2022-37434
https://access.redhat.com/security/cve/CVE-2022-40303
https://access.redhat.com/security/cve/CVE-2022-40304
https://access.redhat.com/security/cve/CVE-2022-40674
https://access.redhat.com/security/cve/CVE-2022-42915
https://access.redhat.com/security/cve/CVE-2022-42916
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

JIRA issues fixed (https://issues.jboss.org/):

LOG-3533 - tls.cert, tls.key and passphrase are not passed to the fluentd configuration when forwarding logs using syslog over TLS
LOG-3534 - [release-5.5] [Administrator Console] Seeing "parse error" while using Severity filter for cluster view user
- Description:
Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud.
For more information about Submariner, see the Submariner open source community website at: https://submariner.io/.
This advisory contains bug fixes and enhancements to the Submariner container images.
Security fixes:
- CVE-2022-32149 golang: golang.org/x/text/language: ParseAcceptLanguage takes a long time to parse complex tags
Bugs addressed:
- Build Submariner 0.13.3 (ACM-2226)
- Verify Submariner with OCP 4.12 (ACM-2435)
- Submariner does not support cluster "kube-proxy ipvs mode" (ACM-2821)
Bugs fixed (https://bugzilla.redhat.com/):
2134010 - CVE-2022-32149 golang: golang.org/x/text/language: ParseAcceptLanguage takes a long time to parse complex tags
- JIRA issues fixed (https://issues.jboss.org/):
ACM-2226 - [ACM 2.6.4] Build Submariner 0.13.3
ACM-2435 - [ACM 2.6.4] Verify Submariner with OCP 4.12
ACM-2821 - [Submariner] - 0.13.3 - Submariner does not support cluster "kube-proxy ipvs mode"
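The ParseAcceptLanguage issue fixed above (CVE-2022-32149) stems from unbounded attacker-controlled input reaching the language-tag parser. Until the patched golang.org/x/text is rolled out, one common defense-in-depth measure is to cap the header length before it reaches any parser. A minimal sketch; the capHeader helper and its 256-byte budget are illustrative assumptions, not values from the advisory:

```go
package main

import "fmt"

// capHeader truncates an Accept-Language header to a small fixed budget
// before it is handed to a parser such as language.ParseAcceptLanguage.
// The 256-byte limit is an illustrative choice; real headers are rarely
// longer, while crafted DoS inputs can be arbitrarily large.
func capHeader(h string) string {
	const maxLen = 256
	if len(h) > maxLen {
		return h[:maxLen]
	}
	return h
}

func main() {
	// Simulate an oversized, attacker-controlled header value.
	long := make([]byte, 10000)
	for i := range long {
		long[i] = 'a'
	}
	fmt.Println(len(capHeader(string(long)))) // bounded regardless of input size
	fmt.Println(capHeader("en-US,en;q=0.9"))  // short headers pass through unchanged
}
```

Truncation can split a language tag at the cut point; parsers tolerant of trailing garbage (or a cut at the last comma before the limit) handle that gracefully.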
- Description:
Security Fix(es):
- archive/tar: unbounded memory consumption when reading headers (CVE-2022-2879)
- regexp/syntax: limit memory used by parsing regexps (CVE-2022-41715)
- net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page listed in the References section.

- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.

- Bugs fixed (https://bugzilla.redhat.com/):
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
- JIRA issues fixed (https://issues.jboss.org/):
OSPK8-664 - Unexpected "unassigned" hostRefs in OSBMS halt further reconcile loops
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2022-12-13-7 tvOS 16.2
tvOS 16.2 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213535.
Accounts Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A user may be able to view sensitive user information Description: This issue was addressed with improved data protection. CVE-2022-42843: Mickey Jin (@patch1t)
AppleAVD Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Parsing a maliciously crafted video file may lead to kernel code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46694: Andrey Labunets and Nikita Tarakanov
AppleMobileFileIntegrity Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to bypass Privacy preferences Description: This issue was addressed by enabling hardened runtime. CVE-2022-42865: Wojciech Reguła (@_r3ggi) of SecuRing
AVEVideoEncoder Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved checks. CVE-2022-42848: ABC Research s.r.o
ImageIO Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46693: Mickey Jin (@patch1t)
ImageIO Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Parsing a maliciously crafted TIFF file may lead to disclosure of user information Description: The issue was addressed with improved memory handling. CVE-2022-42851: Mickey Jin (@patch1t)
IOHIDFamily Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved state handling. CVE-2022-42864: Tommy Muir (@Muirey03)
IOMobileFrameBuffer Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46690: John Aakerblom (@jaakerblom)
Kernel Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with additional validation. CVE-2022-46689: Ian Beer of Google Project Zero
Kernel Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Connecting to a malicious NFS server may lead to arbitrary code execution with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2022-46701: Felix Poulin-Belanger
Kernel Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A remote user may be able to cause kernel code execution Description: The issue was addressed with improved memory handling. CVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year Lab
Kernel Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-42845: Adam Doupé of ASU SEFCOM
libxml2 Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: An integer overflow was addressed through improved input validation. CVE-2022-40303: Maddie Stone of Google Project Zero
libxml2 Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project Zero
Preferences Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to use arbitrary entitlements Description: A logic issue was addressed with improved state management. CVE-2022-42855: Ivan Fratric of Google Project Zero
Safari Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Visiting a website that frames malicious content may lead to UI spoofing Description: A spoofing issue existed in the handling of URLs. This issue was addressed with improved input validation. CVE-2022-46695: KirtiKumar Anandrao Ramchandani
Software Update Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A user may be able to elevate privileges Description: An access issue existed with privileged API calls. This issue was addressed with additional restrictions. CVE-2022-42849: Mickey Jin (@patch1t)
Weather Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to read sensitive location information Description: The issue was addressed with improved handling of caches. CVE-2022-42866: an anonymous researcher
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 245521 CVE-2022-42867: Maddie Stone of Google Project Zero
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory consumption issue was addressed with improved memory handling. WebKit Bugzilla: 245466 CVE-2022-46691: an anonymous researcher
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may bypass Same Origin Policy Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 246783 CVE-2022-46692: KirtiKumar Anandrao Ramchandani
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may result in the disclosure of process memory Description: The issue was addressed with improved memory handling. CVE-2022-42852: hazbinhotel working with Trend Micro Zero Day Initiative
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. WebKit Bugzilla: 246942 CVE-2022-46696: Samuel Groß of Google V8 Security WebKit Bugzilla: 247562 CVE-2022-46700: Samuel Groß of Google V8 Security
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A logic issue was addressed with improved checks. CVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs & DNSLab, Korea Univ.
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 247420 CVE-2022-46699: Samuel Groß of Google V8 Security WebKit Bugzilla: 244622 CVE-2022-42863: an anonymous researcher
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1. Description: A type confusion issue was addressed with improved state handling. WebKit Bugzilla: 248266 CVE-2022-42856: Clément Lecigne of Google's Threat Analysis Group
Additional recognition
Kernel We would like to acknowledge Zweig of Kunlun Lab for their assistance.
Safari Extensions We would like to acknowledge Oliver Dunk and Christian R. of 1Password for their assistance.
WebKit We would like to acknowledge an anonymous researcher and scarlet for their assistance.
Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software." To check the current version of software, select "Settings -> General -> About." All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFX4ACgkQ4RjMIDke NxkItA/+LIwJ66Odl7Uwp1N/qek5Z/TBuPKlbgTwRZGT3LBVMVmyHTBzebA88aNq Pae1RKQ2Txw4w9Tb7a08eeqRQD51MBoSjTxf23tO1o0B1UR3Hgq3gsOSjh/dTq9V Jvy4DpO15xdVHP3BH/li114JpgR+FoD5Du0rPffL01p6YtqeWMSvnRoCmwNcIqou i2ZObfdrL2WJ+IiDIlMoJ3v+B1tDxOWR6Mn37iRdzl+QgrQMQtP9pSsiAPCntA+y eFM5Hp0JlOMtCfA+xT+LRoZHCbjTCFMRlRbNffGvrNwwdTY4MXrSYlKcIo3yFT2m KSHrQNvqzWhmSLAcHlUNo0lVvtPAlrgyilCYaeRNgRC1+x8KRf/AcErXr23oKknJ lzIF6eVk1K3mxUmR+M+P8+cr14pbrUwJcQlm0In6/8fUulHtcElLE3fJ+HJVImx8 RtvNmuCng5iEK1zlwgDvAKO3EgMrMtduF8aygaCcBmt65GMkHwvOGCDXcIrKfH9U sP4eY7V3t4CQd9TX3Vlmt47MwRTSVuUtMcQeQPhEUTdUbM7UlvtW8igrLvkz9uPn CpuE2mzhd/dJANXvMFBR9A0ilAdJO1QD/uSWL+UbKq4BlyiW5etd8gObQfHqqW3C sh0EwxLh4ATicRS9btAJMwIfK/ulYDWp4yuIsUamDj/sN9xWvXY= =i2O9 -----END PGP SIGNATURE-----
- Bugs fixed (https://bugzilla.redhat.com/):
2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents
2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202210-1070",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "libxml2",
"scope": "lt",
"trust": 1.0,
"vendor": "xmlsoft",
"version": "2.10.3"
},
{
"model": "clustered data ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.2"
},
{
"model": "manageability software development kit",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "watchos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "9.2"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.2"
},
{
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "tvos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "16.2"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ipados",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.2"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.7.2"
},
{
"model": "snapmanager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:xmlsoft:libxml2:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "2.10.3",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:smi-s_provider:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:manageability_software_development_kit:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:snapmanager:-:*:*:*:*:hyper-v:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "11.7.2",
"versionStartIncluding": "11.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:tvos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "16.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.7.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.7.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.6.2",
"versionStartIncluding": "12.0",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "171470"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "171017"
},
{
"db": "PACKETSTORM",
"id": "171026"
},
{
"db": "PACKETSTORM",
"id": "171260"
},
{
"db": "PACKETSTORM",
"id": "171318"
},
{
"db": "PACKETSTORM",
"id": "171043"
}
],
"trust": 0.7
},
"cve": "CVE-2022-40304",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-40304",
"trust": 1.0,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "An issue was discovered in libxml2 before 2.10.3. Certain invalid XML entity definitions can corrupt a hash table key, potentially leading to subsequent logic errors. In one case, a double-free can be provoked. It is written in C language and can be called by many languages, such as C language, C++, XSH. Currently there is no information about this vulnerability, please keep an eye on CNNVD or vendor announcements. JIRA issues fixed (https://issues.jboss.org/):\n\nOSSM-1330 - Allow specifying secret as pilot server cert when using CertificateAuthority: Custom\nOSSM-2342 - Run OSSM operator on infrastructure nodes\nOSSM-2371 - Kiali in read-only mode still can change the log level of the envoy proxies\nOSSM-2373 - Can\u0027t login to Kiali with \"Error trying to get OAuth metadata\"\nOSSM-2374 - Deleting a SMM also deletes the SMMR in OpenShift Service Mesh\nOSSM-2492 - Default tolerations in SMCP not passed to Jaeger\nOSSM-2493 - Default nodeSelector and tolerations in SMCP not passed to Kiali\nOSSM-3317 - Error: deployment.accessible_namespaces set to [\u0027**\u0027]\n\n6. ==========================================================================\nUbuntu Security Notice USN-5760-2\nDecember 05, 2022\n\nlibxml2 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in libxml2. \n\nSoftware Description:\n- libxml2: GNOME XML library\n\nDetails:\n\nUSN-5760-1 fixed vulnerabilities in libxml2. This update provides the\ncorresponding updates for Ubuntu 14.04 ESM and Ubuntu 16.04 ESM. \n\nOriginal advisory details:\n\n It was discovered that libxml2 incorrectly handled certain XML files. \n An attacker could possibly use this issue to expose sensitive information\n or cause a crash. \n An attacker could possibly use this issue to execute arbitrary code. 
\n (CVE-2022-40304)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n libxml2 2.9.3+dfsg1-1ubuntu0.7+esm4\n libxml2-utils 2.9.3+dfsg1-1ubuntu0.7+esm4\n\nUbuntu 14.04 ESM:\n libxml2 2.9.1+dfsg1-3ubuntu4.13+esm4\n libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm4\n\nIn general, a standard system update will make all the necessary changes. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update\nAdvisory ID: RHSA-2022:8841-01\nProduct: Red Hat JBoss Core Services\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:8841\nIssue date: 2022-12-08\nCVE Names: CVE-2022-1292 CVE-2022-2068 CVE-2022-22721\n CVE-2022-23943 CVE-2022-26377 CVE-2022-28330\n CVE-2022-28614 CVE-2022-28615 CVE-2022-30522\n CVE-2022-31813 CVE-2022-32206 CVE-2022-32207\n CVE-2022-32208 CVE-2022-32221 CVE-2022-35252\n CVE-2022-37434 CVE-2022-40303 CVE-2022-40304\n CVE-2022-40674 CVE-2022-42915 CVE-2022-42916\n====================================================================\n1. Summary:\n\nAn update is now available for Red Hat JBoss Core Services. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat JBoss Core Services is a set of supplementary software for Red Hat\nJBoss middleware products. This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. 
\n\nThis release of Red Hat JBoss Core Services Apache HTTP Server 2.4.51\nService Pack 1 serves as a replacement for Red Hat JBoss Core Services\nApache HTTP Server 2.4.51, and includes bug fixes and enhancements, which\nare documented in the Release Notes document linked to in the References. \n\nSecurity Fix(es):\n\n* libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)\n* libxml2: dict corruption caused by entity reference cycles\n(CVE-2022-40304)\n* expat: a use-after-free in the doContent function in xmlparse.c\n(CVE-2022-40674)\n* zlib: a heap-based buffer over-read or buffer overflow in inflate in\ninflate.c via a large gzip header extra field (CVE-2022-37434)\n* curl: HSTS bypass via IDN (CVE-2022-42916)\n* curl: HTTP proxy double-free (CVE-2022-42915)\n* curl: POST following PUT confusion (CVE-2022-32221)\n* httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n(CVE-2022-31813)\n* httpd: mod_sed: DoS vulnerability (CVE-2022-30522)\n* httpd: out-of-bounds read in ap_strcmp_match() (CVE-2022-28615)\n* httpd: out-of-bounds read via ap_rwrite() (CVE-2022-28614)\n* httpd: mod_proxy_ajp: Possible request smuggling (CVE-2022-26377)\n* curl: control code in cookie denial of service (CVE-2022-35252)\n* zlib: a heap-based buffer over-read or buffer overflow in inflate in\ninflate.c via a large gzip header extra field (CVE-2022-37434)\n* jbcs-httpd24-httpd: httpd: mod_isapi: out-of-bounds read (CVE-2022-28330)\n* curl: Unpreserved file permissions (CVE-2022-32207)\n* curl: various flaws (CVE-2022-32206 CVE-2022-32208)\n* openssl: the c_rehash script allows command injection (CVE-2022-2068)\n* openssl: c_rehash script allows command injection (CVE-2022-1292)\n* jbcs-httpd24-httpd: httpd: core: Possible buffer overflow with very large\nor unlimited LimitXMLRequestBody (CVE-2022-22721)\n* jbcs-httpd24-httpd: httpd: mod_sed: Read/write beyond bounds\n(CVE-2022-23943)\n\nFor more details about the security issue(s), including the impact, a 
CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds\n2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody\n2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection\n2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling\n2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read\n2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()\n2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()\n2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability\n2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2099305 - CVE-2022-32207 curl: Unpreserved file permissions\n2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification\n2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field\n2120718 - CVE-2022-35252 curl: control code in cookie denial of service\n2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c\n2135411 - CVE-2022-32221 curl: POST following PUT confusion\n2135413 - CVE-2022-42915 curl: HTTP proxy double-free\n2135416 - CVE-2022-42916 curl: HSTS bypass via IDN\n2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE\n2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-22721\nhttps://access.redhat.com/security/cve/CVE-2022-23943\nhttps://access.redhat.com/security/cve/CVE-2022-26377\nhttps://access.redhat.com/security/cve/CVE-2022-28330\nhttps://access.redhat.com/security/cve/CVE-2022-28614\nhttps://access.redhat.com/security/cve/CVE-2022-28615\nhttps://access.redhat.com/security/cve/CVE-2022-30522\nhttps://access.redhat.com/security/cve/CVE-2022-31813\nhttps://access.redhat.com/security/cve/CVE-2022-32206\nhttps://access.redhat.com/security/cve/CVE-2022-32207\nhttps://access.redhat.com/security/cve/CVE-2022-32208\nhttps://access.redhat.com/security/cve/CVE-2022-32221\nhttps://access.redhat.com/security/cve/CVE-2022-35252\nhttps://access.redhat.com/security/cve/CVE-2022-37434\nhttps://access.redhat.com/security/cve/CVE-2022-40303\nhttps://access.redhat.com/security/cve/CVE-2022-40304\nhttps://access.redhat.com/security/cve/CVE-2022-40674\nhttps://access.redhat.com/security/cve/CVE-2022-42915\nhttps://access.redhat.com/security/cve/CVE-2022-42916\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-3533 - tls.cert, tls.key and passphrase are not passed to the fluentd configuration when forwarding logs using syslog over TLS\nLOG-3534 - [release-5.5] [Administrator Console] Seeing \"parse error\" while using Severity filter for cluster view user\n\n5. Description:\n\nSubmariner enables direct networking between pods and services on different\nKubernetes clusters that are either on-premises or in the cloud. 
\n\nFor more information about Submariner, see the Submariner open source\ncommunity website at: https://submariner.io/. \n\nThis advisory contains bug fixes and enhancements to the Submariner\ncontainer images. \n\nSecurity fixes:\n\n* CVE-2022-32149 golang: golang.org/x/text/language: ParseAcceptLanguage\ntakes a long time to parse complex tags\n\nBugs addressed:\n\n* Build Submariner 0.13.3 (ACM-2226)\n* Verify Submariner with OCP 4.12 (ACM-2435)\n* Submariner does not support cluster \"kube-proxy ipvs mode\" (ACM-2821)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2134010 - CVE-2022-32149 golang: golang.org/x/text/language: ParseAcceptLanguage takes a long time to parse complex tags\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nACM-2226 - [ACM 2.6.4] Build Submariner 0.13.3\nACM-2435 - [ACM 2.6.4] Verify Submariner with OCP 4.12\nACM-2821 - [Submariner] - 0.13.3 - Submariner does not support cluster \"kube-proxy ipvs mode\"\n\n6. Description:\n\nSecurity Fix(es):\n\n* archive/tar: unbounded memory consumption when reading headers\n(CVE-2022-2879)\n\n* regexp/syntax: limit memory used by parsing regexps (CVE-2022-41715)\n\n* net/http: An attacker can cause excessive memory growth in a Go server\naccepting HTTP/2 requests (CVE-2022-41717)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage listed in the References section. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nOSPK8-664 - Unexpected \"unassigned\" hostRefs in OSBMS halt further reconcile loops\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-12-13-7 tvOS 16.2\n\ntvOS 16.2 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213535. \n\nAccounts\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A user may be able to view sensitive user information\nDescription: This issue was addressed with improved data protection. \nCVE-2022-42843: Mickey Jin (@patch1t)\n\nAppleAVD\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Parsing a maliciously crafted video file may lead to kernel\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46694: Andrey Labunets and Nikita Tarakanov\n\nAppleMobileFileIntegrity\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2022-42865: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nAVEVideoEncoder\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-42848: ABC Research s.r.o\n\nImageIO\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. 
\nCVE-2022-46693: Mickey Jin (@patch1t)\n\nImageIO\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Parsing a maliciously crafted TIFF file may lead to\ndisclosure of user information\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42851: Mickey Jin (@patch1t)\n\nIOHIDFamily\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2022-42864: Tommy Muir (@Muirey03)\n\nIOMobileFrameBuffer\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46690: John Aakerblom (@jaakerblom)\n\nKernel\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with additional\nvalidation. \nCVE-2022-46689: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Connecting to a malicious NFS server may lead to arbitrary\ncode execution with kernel privileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2022-46701: Felix Poulin-Belanger\n\nKernel\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A remote user may be able to cause kernel code execution\nDescription: The issue was addressed with improved memory handling. 
\nCVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nKernel\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42845: Adam Doup\u00e9 of ASU SEFCOM\n\nlibxml2\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. \nCVE-2022-40303: Maddie Stone of Google Project Zero\n\nlibxml2\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project\nZero\n\nPreferences\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to use arbitrary entitlements\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-42855: Ivan Fratric of Google Project Zero\n\nSafari\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Visiting a website that frames malicious content may lead to\nUI spoofing\nDescription: A spoofing issue existed in the handling of URLs. This\nissue was addressed with improved input validation. \nCVE-2022-46695: KirtiKumar Anandrao Ramchandani\n\nSoftware Update\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A user may be able to elevate privileges\nDescription: An access issue existed with privileged API calls. This\nissue was addressed with additional restrictions. 
\nCVE-2022-42849: Mickey Jin (@patch1t)\n\nWeather\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to read sensitive location information\nDescription: The issue was addressed with improved handling of\ncaches. \nCVE-2022-42866: an anonymous researcher\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 245521\nCVE-2022-42867: Maddie Stone of Google Project Zero\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 245466\nCVE-2022-46691: an anonymous researcher\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may bypass Same\nOrigin Policy\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 246783\nCVE-2022-46692: KirtiKumar Anandrao Ramchandani\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may result in the\ndisclosure of process memory\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42852: hazbinhotel working with Trend Micro Zero Day\nInitiative\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. 
\nWebKit Bugzilla: 246942\nCVE-2022-46696: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 247562\nCVE-2022-46700: Samuel Gro\u00df of Google V8 Security\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs\n\u0026 DNSLab, Korea Univ. \n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 247420\nCVE-2022-46699: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 244622\nCVE-2022-42863: an anonymous researcher\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution. Apple is aware of a report that this issue\nmay have been actively exploited against versions of iOS released\nbefore iOS 15.1. \nDescription: A type confusion issue was addressed with improved state\nhandling. \nWebKit Bugzilla: 248266\nCVE-2022-42856: Cl\u00e9ment Lecigne of Google\u0027s Threat Analysis Group\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Zweig of Kunlun Lab for their\nassistance. \n\nSafari Extensions\nWe would like to acknowledge Oliver Dunk and Christian R. of\n1Password for their assistance. \n\nWebKit\nWe would like to acknowledge an anonymous researcher and scarlet for\ntheir assistance. \n\nApple TV will periodically check for software updates. 
Alternatively,\nyou may manually check for software updates by selecting \"Settings -\u003e\nSystem -\u003e Software Update -\u003e Update Software.\" To check the current\nversion of software, select \"Settings -\u003e General -\u003e About.\"\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFX4ACgkQ4RjMIDke\nNxkItA/+LIwJ66Odl7Uwp1N/qek5Z/TBuPKlbgTwRZGT3LBVMVmyHTBzebA88aNq\nPae1RKQ2Txw4w9Tb7a08eeqRQD51MBoSjTxf23tO1o0B1UR3Hgq3gsOSjh/dTq9V\nJvy4DpO15xdVHP3BH/li114JpgR+FoD5Du0rPffL01p6YtqeWMSvnRoCmwNcIqou\ni2ZObfdrL2WJ+IiDIlMoJ3v+B1tDxOWR6Mn37iRdzl+QgrQMQtP9pSsiAPCntA+y\neFM5Hp0JlOMtCfA+xT+LRoZHCbjTCFMRlRbNffGvrNwwdTY4MXrSYlKcIo3yFT2m\nKSHrQNvqzWhmSLAcHlUNo0lVvtPAlrgyilCYaeRNgRC1+x8KRf/AcErXr23oKknJ\nlzIF6eVk1K3mxUmR+M+P8+cr14pbrUwJcQlm0In6/8fUulHtcElLE3fJ+HJVImx8\nRtvNmuCng5iEK1zlwgDvAKO3EgMrMtduF8aygaCcBmt65GMkHwvOGCDXcIrKfH9U\nsP4eY7V3t4CQd9TX3Vlmt47MwRTSVuUtMcQeQPhEUTdUbM7UlvtW8igrLvkz9uPn\nCpuE2mzhd/dJANXvMFBR9A0ilAdJO1QD/uSWL+UbKq4BlyiW5etd8gObQfHqqW3C\nsh0EwxLh4ATicRS9btAJMwIfK/ulYDWp4yuIsUamDj/sN9xWvXY=\n=i2O9\n-----END PGP SIGNATURE-----\n\n\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents\n2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40304"
},
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "PACKETSTORM",
"id": "171470"
},
{
"db": "PACKETSTORM",
"id": "170097"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "171017"
},
{
"db": "PACKETSTORM",
"id": "171026"
},
{
"db": "PACKETSTORM",
"id": "171260"
},
{
"db": "PACKETSTORM",
"id": "171318"
},
{
"db": "PACKETSTORM",
"id": "169858"
},
{
"db": "PACKETSTORM",
"id": "170317"
},
{
"db": "PACKETSTORM",
"id": "171043"
}
],
"trust": 1.89
},
"exploit_availability": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-429438",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
}
]
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-40304",
"trust": 2.1
},
{
"db": "PACKETSTORM",
"id": "170317",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171043",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "169858",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170097",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171017",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171260",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "169824",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170316",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170753",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171016",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169857",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170318",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170555",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171173",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170752",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169620",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170899",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170096",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170312",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170955",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169732",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171042",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170754",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170315",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171040",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202210-1022",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-429438",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171470",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170165",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171026",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171318",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "PACKETSTORM",
"id": "171470"
},
{
"db": "PACKETSTORM",
"id": "170097"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "171017"
},
{
"db": "PACKETSTORM",
"id": "171026"
},
{
"db": "PACKETSTORM",
"id": "171260"
},
{
"db": "PACKETSTORM",
"id": "171318"
},
{
"db": "PACKETSTORM",
"id": "169858"
},
{
"db": "PACKETSTORM",
"id": "170317"
},
{
"db": "PACKETSTORM",
"id": "171043"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"id": "VAR-202210-1070",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T21:36:19.928000Z",
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-415",
"trust": 1.0
},
{
"problemtype": "CWE-611",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20221209-0003/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213531"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213533"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213534"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213535"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213536"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/21"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/24"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/25"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/26"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/27"
},
{
"trust": 1.1,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/commit/1b41ec4e9433b05bb0376be4725804c54ef1d80b"
},
{
"trust": 1.1,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/tags"
},
{
"trust": 1.1,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/tags/v2.10.3"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40304"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40303"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-47629"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-35737"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-46848"
},
{
"trust": 0.4,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-47629"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-41717"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-4415"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35737"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-4415"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-41717"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-42010"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-43680"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-42011"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-42012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-48303"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23521"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-41903"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-41903"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23521"
},
{
"trust": 0.2,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.2,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-45061"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1448"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28861"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10735"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42011"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40897"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28861"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-43680"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40897"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42010"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-45061"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10735"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42012"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5760-2"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5760-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28614"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26377"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8841"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31813"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28615"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-35252"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28614"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28615"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26377"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0633"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3821"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2601"
},
{
"trust": 0.1,
"url": "https://submariner.io/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3787"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2601"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://submariner.io/getting-started/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32149"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2509"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3775"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0795"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/add-ons/add-ons-overview#submariner-deploy-console"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30698"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30699"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41974"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2879"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1079"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2879"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41715"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-41715"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-48303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.11/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1181"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213504."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42849"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42848"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42842"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42855"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42845"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42865"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42851"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42843"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42856"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42864"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213535."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4238"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3064"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3064"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0803"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "PACKETSTORM",
"id": "171470"
},
{
"db": "PACKETSTORM",
"id": "170097"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "171017"
},
{
"db": "PACKETSTORM",
"id": "171026"
},
{
"db": "PACKETSTORM",
"id": "171260"
},
{
"db": "PACKETSTORM",
"id": "171318"
},
{
"db": "PACKETSTORM",
"id": "169858"
},
{
"db": "PACKETSTORM",
"id": "170317"
},
{
"db": "PACKETSTORM",
"id": "171043"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "PACKETSTORM",
"id": "171470"
},
{
"db": "PACKETSTORM",
"id": "170097"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "171017"
},
{
"db": "PACKETSTORM",
"id": "171026"
},
{
"db": "PACKETSTORM",
"id": "171260"
},
{
"db": "PACKETSTORM",
"id": "171318"
},
{
"db": "PACKETSTORM",
"id": "169858"
},
{
"db": "PACKETSTORM",
"id": "170317"
},
{
"db": "PACKETSTORM",
"id": "171043"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-11-23T00:00:00",
"db": "VULHUB",
"id": "VHN-429438"
},
{
"date": "2023-03-24T16:45:17",
"db": "PACKETSTORM",
"id": "171470"
},
{
"date": "2022-12-05T15:18:44",
"db": "PACKETSTORM",
"id": "170097"
},
{
"date": "2022-12-08T21:28:21",
"db": "PACKETSTORM",
"id": "170165"
},
{
"date": "2023-02-16T15:42:01",
"db": "PACKETSTORM",
"id": "171017"
},
{
"date": "2023-02-16T15:45:25",
"db": "PACKETSTORM",
"id": "171026"
},
{
"date": "2023-03-07T19:04:22",
"db": "PACKETSTORM",
"id": "171260"
},
{
"date": "2023-03-10T14:24:58",
"db": "PACKETSTORM",
"id": "171318"
},
{
"date": "2022-11-15T16:42:35",
"db": "PACKETSTORM",
"id": "169858"
},
{
"date": "2022-12-22T02:12:53",
"db": "PACKETSTORM",
"id": "170317"
},
{
"date": "2023-02-17T16:07:29",
"db": "PACKETSTORM",
"id": "171043"
},
{
"date": "2022-11-23T18:15:12.167000",
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-02-23T00:00:00",
"db": "VULHUB",
"id": "VHN-429438"
},
{
"date": "2023-11-07T03:52:15.353000",
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat Security Advisory 2023-1448-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "171470"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "overflow, code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "169858"
}
],
"trust": 0.2
}
}
VAR-202004-0530
Vulnerability from variot - Updated: 2024-07-23 21:32. In filter.c in slapd in OpenLDAP before 2.4.50, LDAP search filters with nested boolean expressions can result in denial of service (daemon crash). OpenLDAP contains a resource exhaustion vulnerability that may leave the service in a denial-of-service (DoS) state. The vulnerability lies in the filter.c file of slapd in versions earlier than OpenLDAP 2.4.50.
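The input class behind this issue can be illustrated with a short sketch that merely builds a deeply nested boolean LDAP filter string (the kind of expression slapd before 2.4.50 could crash on while evaluating). This is an illustration only, not an exploit; it contacts no server, and the helper name is our own:

```python
# Illustration of the input class behind CVE-2020-12243: LDAP search
# filters made of deeply nested boolean expressions. slapd versions
# before 2.4.50 could crash while evaluating such filters. This code
# only constructs the filter string; it does not contact any server.

def nested_or_filter(leaf: str, depth: int) -> str:
    """Wrap a leaf filter in `depth` layers of OR expressions."""
    expr = leaf
    for _ in range(depth):
        expr = f"(|{expr})"
    return expr

filter_str = nested_or_filter("(cn=test)", 5)
print(filter_str)  # (|(|(|(|(|(cn=test))))))
```

In a real attack the nesting depth would be far larger; the fix in 2.4.50 limits how such nested expressions are processed.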
Bug Fix(es):
- Gather image registry config (backport to 4.3) (BZ#1836815)
- Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist (BZ#1849176)
- Login with OpenShift not working after cluster upgrade (BZ#1852429)
- Limit the size of gathered federated metrics from alerts in Insights Operator (BZ#1874018)
- [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs (BZ#1879110)
- [release 4.3] OpenShift APIs become unavailable for more than 15 minutes after one of master nodes went down(OAuth) (BZ#1880293)
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-x86_64
The image digest is sha256:9ff90174a170379e90a9ead6e0d8cf6f439004191f80762764a5ca3dbaab01dc
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-s390x
The image digest is sha256:605ddde0442e604cfe2d6bd1541ce48df5956fe626edf9cc95b1fca75d231b64
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-ppc64le
The image digest is sha256:d3c9e391c145338eae3feb7f6a4e487dadc8139a353117d642fe686d277bcccc
- Solution:
For OpenShift Container Platform 4.3 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.3/updating/updating-cluster-cli.html. Bugs fixed (https://bugzilla.redhat.com/):
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1836815 - Gather image registry config (backport to 4.3)
1849176 - Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist
1874018 - Limit the size of gathered federated metrics from alerts in Insights Operator
1874399 - [DR] etcd-member-recover.sh fails to pull image with unauthorized
1879110 - [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs
Ansible Automation Platform manages Ansible Platform jobs and workflows that can interface with any infrastructure on a Red Hat OpenShift Container Platform cluster, or on a traditional infrastructure that is running off-cluster. Bugs fixed (https://bugzilla.redhat.com/):
1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module
1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values
1916813 - CVE-2021-20191 ansible: multiple modules expose secured values
1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option
1939349 - CVE-2021-3447 ansible: multiple modules expose secured values
- Description:
Red Hat 3scale API Management delivers centralized API management features through a distributed, cloud-hosted layer. It includes built-in features to help in building a more successful API program, including access control, rate limits, payment gateway integration, and developer experience tools.
This advisory is intended to use with container images for Red Hat 3scale API Management 2.10.0. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
- Description:
Red Hat OpenShift Do (odo) is a simple CLI tool for developers to create, build, and deploy applications on OpenShift. The odo tool is completely client-based and requires no server within the OpenShift cluster for deployment. It detects changes to local code and deploys it to the cluster automatically, giving instant feedback to validate changes in real-time. It supports multiple programming languages and frameworks.
The advisory addresses the following issues:
- Re-release of odo-init-image 1.1.3 for security updates
- Solution:
Download and install a new CLI binary by following the instructions linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):
1832983 - Release of 1.1.3 odo-init-image
- ==========================================================================
Ubuntu Security Notice USN-4352-1
May 06, 2020
openldap vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
- Ubuntu 19.10
- Ubuntu 18.04 LTS
- Ubuntu 16.04 LTS
Summary:
OpenLDAP could be made to crash if it received specially crafted network traffic. A remote attacker could possibly use this issue to cause OpenLDAP to consume resources, resulting in a denial of service.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS: slapd 2.4.49+dfsg-2ubuntu1.2
Ubuntu 19.10: slapd 2.4.48+dfsg-1ubuntu1.1
Ubuntu 18.04 LTS: slapd 2.4.45+dfsg-1ubuntu1.5
Ubuntu 16.04 LTS: slapd 2.4.42+dfsg-2ubuntu3.8
In general, a standard system update will make all the necessary changes.
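As a quick triage aid, the upstream cutoff can also be checked programmatically. This is a minimal sketch comparing plain x.y.z upstream version tuples; it deliberately ignores distribution epochs and packaging suffixes such as "+dfsg", so it is not a substitute for the per-release fixed package versions given in the notice:

```python
# Minimal sketch: flag upstream OpenLDAP versions affected by
# CVE-2020-12243 (fixed upstream in 2.4.50). Compares bare x.y.z
# tuples only; distro epochs/suffixes (e.g. "+dfsg-2ubuntu1.2")
# are intentionally not handled here.

FIXED = (2, 4, 50)

def is_vulnerable(version: str) -> bool:
    """Return True if the upstream version predates the 2.4.50 fix."""
    parts = tuple(int(p) for p in version.split("."))
    return parts < FIXED

for v in ("2.4.49", "2.4.50", "2.4.44"):
    print(v, "vulnerable" if is_vulnerable(v) else "fixed")
```

For distro packages, the vendor's own comparison tool (e.g. `dpkg --compare-versions`) should be preferred, since backported fixes keep the old upstream version number.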
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: openldap security update
Advisory ID:       RHSA-2020:4041-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:4041
Issue date:        2020-09-29
CVE Names:         CVE-2020-12243
====================================================================

1. Summary:
An update for openldap is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
OpenLDAP is an open-source suite of Lightweight Directory Access Protocol (LDAP) applications and development tools. LDAP is a set of protocols used to access and maintain distributed directory information services over an IP network. The openldap packages contain configuration files, libraries, and documentation for OpenLDAP.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: openldap-2.4.44-22.el7.src.rpm
x86_64: openldap-2.4.44-22.el7.i686.rpm openldap-2.4.44-22.el7.x86_64.rpm openldap-clients-2.4.44-22.el7.x86_64.rpm openldap-debuginfo-2.4.44-22.el7.i686.rpm openldap-debuginfo-2.4.44-22.el7.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: openldap-debuginfo-2.4.44-22.el7.i686.rpm openldap-debuginfo-2.4.44-22.el7.x86_64.rpm openldap-devel-2.4.44-22.el7.i686.rpm openldap-devel-2.4.44-22.el7.x86_64.rpm openldap-servers-2.4.44-22.el7.x86_64.rpm openldap-servers-sql-2.4.44-22.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: openldap-2.4.44-22.el7.src.rpm
x86_64: openldap-2.4.44-22.el7.i686.rpm openldap-2.4.44-22.el7.x86_64.rpm openldap-clients-2.4.44-22.el7.x86_64.rpm openldap-debuginfo-2.4.44-22.el7.i686.rpm openldap-debuginfo-2.4.44-22.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: openldap-debuginfo-2.4.44-22.el7.i686.rpm openldap-debuginfo-2.4.44-22.el7.x86_64.rpm openldap-devel-2.4.44-22.el7.i686.rpm openldap-devel-2.4.44-22.el7.x86_64.rpm openldap-servers-2.4.44-22.el7.x86_64.rpm openldap-servers-sql-2.4.44-22.el7.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: openldap-2.4.44-22.el7.src.rpm
ppc64: openldap-2.4.44-22.el7.ppc.rpm openldap-2.4.44-22.el7.ppc64.rpm openldap-clients-2.4.44-22.el7.ppc64.rpm openldap-debuginfo-2.4.44-22.el7.ppc.rpm openldap-debuginfo-2.4.44-22.el7.ppc64.rpm openldap-devel-2.4.44-22.el7.ppc.rpm openldap-devel-2.4.44-22.el7.ppc64.rpm openldap-servers-2.4.44-22.el7.ppc64.rpm
ppc64le: openldap-2.4.44-22.el7.ppc64le.rpm openldap-clients-2.4.44-22.el7.ppc64le.rpm openldap-debuginfo-2.4.44-22.el7.ppc64le.rpm openldap-devel-2.4.44-22.el7.ppc64le.rpm openldap-servers-2.4.44-22.el7.ppc64le.rpm
s390x: openldap-2.4.44-22.el7.s390.rpm openldap-2.4.44-22.el7.s390x.rpm openldap-clients-2.4.44-22.el7.s390x.rpm openldap-debuginfo-2.4.44-22.el7.s390.rpm openldap-debuginfo-2.4.44-22.el7.s390x.rpm openldap-devel-2.4.44-22.el7.s390.rpm openldap-devel-2.4.44-22.el7.s390x.rpm openldap-servers-2.4.44-22.el7.s390x.rpm
x86_64: openldap-2.4.44-22.el7.i686.rpm openldap-2.4.44-22.el7.x86_64.rpm openldap-clients-2.4.44-22.el7.x86_64.rpm openldap-debuginfo-2.4.44-22.el7.i686.rpm openldap-debuginfo-2.4.44-22.el7.x86_64.rpm openldap-devel-2.4.44-22.el7.i686.rpm openldap-devel-2.4.44-22.el7.x86_64.rpm openldap-servers-2.4.44-22.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: openldap-debuginfo-2.4.44-22.el7.ppc64.rpm openldap-servers-sql-2.4.44-22.el7.ppc64.rpm
ppc64le: openldap-debuginfo-2.4.44-22.el7.ppc64le.rpm openldap-servers-sql-2.4.44-22.el7.ppc64le.rpm
s390x: openldap-debuginfo-2.4.44-22.el7.s390x.rpm openldap-servers-sql-2.4.44-22.el7.s390x.rpm
x86_64: openldap-debuginfo-2.4.44-22.el7.x86_64.rpm openldap-servers-sql-2.4.44-22.el7.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: openldap-2.4.44-22.el7.src.rpm
x86_64: openldap-2.4.44-22.el7.i686.rpm openldap-2.4.44-22.el7.x86_64.rpm openldap-clients-2.4.44-22.el7.x86_64.rpm openldap-debuginfo-2.4.44-22.el7.i686.rpm openldap-debuginfo-2.4.44-22.el7.x86_64.rpm openldap-devel-2.4.44-22.el7.i686.rpm openldap-devel-2.4.44-22.el7.x86_64.rpm openldap-servers-2.4.44-22.el7.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: openldap-debuginfo-2.4.44-22.el7.x86_64.rpm openldap-servers-sql-2.4.44-22.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2020-12243 https://access.redhat.com/security/updates/classification/#moderate https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2020 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://www.redhat.com/mailman/listinfo/rhsa-announce .
For the oldstable distribution (stretch), this problem has been fixed in version 2.4.44+dfsg-5+deb9u4.
For the stable distribution (buster), this problem has been fixed in version 2.4.47+dfsg-3+deb10u2.
For the detailed security status of openldap please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openldap
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org
Show details on source website{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202004-0530",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "openldap",
"scope": "lt",
"trust": 1.0,
"vendor": "openldap",
"version": "2.4.50"
},
{
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.1"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.14.6"
},
{
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.15"
},
{
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.13.6"
},
{
"model": "solaris",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "10"
},
{
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.14.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "8.0"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "16.04"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "20.04"
},
{
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.13.0"
},
{
"model": "zfs storage appliance kit",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.8"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "12.04"
},
{
"model": "solaris",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "18.04"
},
{
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.14.6"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.6"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "19.10"
},
{
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "broadcom",
"version": null
},
{
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.13.6"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "14.04"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "gnu/linux",
"scope": null,
"trust": 0.8,
"vendor": "debian",
"version": null
},
{
"model": "openldap",
"scope": "eq",
"trust": 0.8,
"vendor": "openldap",
"version": "2.4.50"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:openldap:openldap:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "2.4.50",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:12.04:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:14.04:*:*:*:esm:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:16.04:*:*:*:esm:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.04:*:*:*:lts:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:19.10:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:20.04:*:*:*:lts:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.13.6",
"versionStartIncluding": "10.13.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2018-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2018-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2019-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2019-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2019-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2019-004:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2019-005:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2019-006:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2019-007:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2020-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2020-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:security_update_2020-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.13.6:supplemental_update:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.14.6",
"versionStartIncluding": "10.14.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2019-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2019-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2019-004:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2019-005:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2019-006:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2019-007:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2020-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2020-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2020-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2020-004:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2020-005:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2020-006:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2020-007:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2021-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2021-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:security_update_2021-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:supplemental_update:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.14.6:supplemental_update_2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.15.6",
"versionStartIncluding": "10.15",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:zfs_storage_appliance_kit:8.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:oracle:solaris:10:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:oracle:solaris:11:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "159347"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
}
],
"trust": 1.1
},
"cve": "CVE-2020-12243",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Network",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Partial",
"baseScore": 5.0,
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "JVNDB-2020-005084",
"impactScore": null,
"integrityImpact": "None",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "Medium",
"trust": 0.8,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "VHN-164902",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
                "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.5,
"baseSeverity": "High",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "JVNDB-2020-005084",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2020-12243",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "NVD",
"id": "JVNDB-2020-005084",
"trust": 0.8,
"value": "High"
},
{
"author": "CNNVD",
"id": "CNNVD-202004-2326",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-164902",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "In filter.c in slapd in OpenLDAP before 2.4.50, LDAP search filters with nested boolean expressions can result in denial of service (daemon crash). OpenLDAP Exists in a resource exhaustion vulnerability.Service operation interruption (DoS) It may be put into a state. The filter.c file of slapd in versions earlier than OpenLDAP 2.4.50 has a security vulnerability. \n\nBug Fix(es):\n\n* Gather image registry config (backport to 4.3) (BZ#1836815)\n\n* Builds fail after running postCommit script if OCP cluster is configured\nwith a container registry whitelist (BZ#1849176)\n\n* Login with OpenShift not working after cluster upgrade (BZ#1852429)\n\n* Limit the size of gathered federated metrics from alerts in Insights\nOperator (BZ#1874018)\n\n* [4.3] Storage operator stops reconciling when going Upgradeable=False on\nv1alpha1 CRDs (BZ#1879110)\n\n* [release 4.3] OpenShift APIs become unavailable for more than 15 minutes\nafter one of master nodes went down(OAuth) (BZ#1880293)\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-x86_64\n\nThe image digest is\nsha256:9ff90174a170379e90a9ead6e0d8cf6f439004191f80762764a5ca3dbaab01dc\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-s390x\nThe image digest is\nsha256:605ddde0442e604cfe2d6bd1541ce48df5956fe626edf9cc95b1fca75d231b64\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-ppc64le\n\nThe image digest is\nsha256:d3c9e391c145338eae3feb7f6a4e487dadc8139a353117d642fe686d277bcccc\n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.3 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.3/updating/updating-cluster\n- -cli.html. Bugs fixed (https://bugzilla.redhat.com/):\n\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1836815 - Gather image registry config (backport to 4.3)\n1849176 - Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist\n1874018 - Limit the size of gathered federated metrics from alerts in Insights Operator\n1874399 - [DR] etcd-member-recover.sh fails to pull image with unauthorized\n1879110 - [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs\n\n5. \n\nAnsible Automation Platform manages Ansible Platform jobs and workflows\nthat can interface with any infrastructure on a Red Hat OpenShift Container\nPlatform cluster, or on a traditional infrastructure that is running\noff-cluster. Bugs fixed (https://bugzilla.redhat.com/):\n\n1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module\n1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values\n1916813 - CVE-2021-20191 ansible: multiple modules expose secured values\n1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option\n1939349 - CVE-2021-3447 ansible: multiple modules expose secured values\n\n5. Description:\n\nRed Hat 3scale API Management delivers centralized API management features\nthrough a distributed, cloud-hosted layer. 
It includes built-in features to\nhelp in building a more successful API program, including access control,\nrate limits, payment gateway integration, and developer experience tools. \n\nThis advisory is intended to use with container images for Red Hat 3scale\nAPI Management 2.10.0. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n\n5. Description:\n\nRed Hat OpenShift Do (odo) is a simple CLI tool for developers to create,\nbuild, and deploy applications on OpenShift. The odo tool is completely\nclient-based and requires no server within the OpenShift cluster for\ndeployment. It detects changes to local code and deploys it to the cluster\nautomatically, giving instant feedback to validate changes in real-time. It\nsupports multiple programming languages and frameworks. \n\nThe advisory addresses the following issues:\n\n* Re-release of odo-init-image 1.1.3 for security updates\n\n3. Solution:\n\nDownload and install a new CLI binary by following the instructions linked\nfrom the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1832983 - Release of 1.1.3 odo-init-image\n\n5. ==========================================================================\nUbuntu Security Notice USN-4352-1\nMay 06, 2020\n\nopenldap vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n- Ubuntu 19.10\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 LTS\n\nSummary:\n\nOpenLDAP could be made to crash if it received specially crafted network\ntraffic. 
A\nremote attacker could possibly use this issue to cause OpenLDAP to consume\nresources, resulting in a denial of service. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n slapd 2.4.49+dfsg-2ubuntu1.2\n\nUbuntu 19.10:\n slapd 2.4.48+dfsg-1ubuntu1.1\n\nUbuntu 18.04 LTS:\n slapd 2.4.45+dfsg-1ubuntu1.5\n\nUbuntu 16.04 LTS:\n slapd 2.4.42+dfsg-2ubuntu3.8\n\nIn general, a standard system update will make all the necessary changes. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: openldap security update\nAdvisory ID: RHSA-2020:4041-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2020:4041\nIssue date: 2020-09-29\nCVE Names: CVE-2020-12243\n====================================================================\n1. Summary:\n\nAn update for openldap is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. 
Description:\n\nOpenLDAP is an open-source suite of Lightweight Directory Access Protocol\n(LDAP) applications and development tools. LDAP is a set of protocols used\nto access and maintain distributed directory information services over an\nIP network. The openldap packages contain configuration files, libraries,\nand documentation for OpenLDAP. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nopenldap-2.4.44-22.el7.src.rpm\n\nx86_64:\nopenldap-2.4.44-22.el7.i686.rpm\nopenldap-2.4.44-22.el7.x86_64.rpm\nopenldap-clients-2.4.44-22.el7.x86_64.rpm\nopenldap-debuginfo-2.4.44-22.el7.i686.rpm\nopenldap-debuginfo-2.4.44-22.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nopenldap-debuginfo-2.4.44-22.el7.i686.rpm\nopenldap-debuginfo-2.4.44-22.el7.x86_64.rpm\nopenldap-devel-2.4.44-22.el7.i686.rpm\nopenldap-devel-2.4.44-22.el7.x86_64.rpm\nopenldap-servers-2.4.44-22.el7.x86_64.rpm\nopenldap-servers-sql-2.4.44-22.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nopenldap-2.4.44-22.el7.src.rpm\n\nx86_64:\nopenldap-2.4.44-22.el7.i686.rpm\nopenldap-2.4.44-22.el7.x86_64.rpm\nopenldap-clients-2.4.44-22.el7.x86_64.rpm\nopenldap-debuginfo-2.4.44-22.el7.i686.rpm\nopenldap-debuginfo-2.4.44-22.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nopenldap-debuginfo-2.4.44-22.el7.i686.rpm\nopenldap-debuginfo-2.4.44-22.el7.x86_64.rpm\nopenldap-devel-2.4.44-22.el7.i686.rpm\nopenldap-devel-2.4.44-22.el7.x86_64.rpm\nopenldap-servers-2.4.44-22.el7.x86_64.rpm\nopenldap-servers-sql-2.4.44-22.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nopenldap-2.4.44-22.el7.src.rpm\n\nppc64:\nopenldap-2.4.44-22.el7.ppc.rpm\nopenldap-2.4.44-22.el7.ppc64.rpm\nopenldap-clients-2.4.44-22.el7.ppc64.rpm\nopenldap-debuginfo-2.4.44-22.el7.ppc.rpm\nopenldap-debuginfo-2.4.44-22.el7.ppc64.rpm\nopenldap-devel-2.4.44-22.el7.ppc.rpm\nopenldap-devel-2.4.44-22.el7.ppc64.rpm\nopenldap-servers-2.4.44-22.el7.ppc64.rpm\n\nppc64le:\nopenldap-2.4.44-22.el7.ppc64le.rpm\nopenldap-clients-2.4.44-22.el7.ppc64le.rpm\nopenldap-debuginfo-2.4.44-22.el7.ppc64le.rpm\nopenldap-devel-2.4.44-22.el7.ppc64le.rpm\nopenldap-servers-2.4.44-22.el7.ppc64le.rpm\n\ns390x:\nopenldap-2.4.44-22.el7.s390.rpm\nopenldap-2.4.44-22.el7.s390x.rpm\nopenldap-clients-2.4.44-22.el7.s390x.rpm\nopenldap-debuginfo-2.4.44-22.el7.s390.rpm\nopenldap-debuginfo-2.4.44-22.el7.s390x.rpm\nopenldap-devel-2.4.44-22.el7.s390.rpm\nopenldap-devel-2.4.44-22.el7.s390x.rpm\nopenldap-servers-2.4.44-22.el7.s390x.rpm\n\nx86_64:\nopenldap-2.4.44-22.el7.i686.rpm\nopenldap-2.4.44-22.el7.x86_64.rpm\nopenldap-clients-2.4.44-22.el7.x86_64.rpm\nopenldap-debuginfo-2.4.44-22.el7.i686.rpm\nopenldap-debuginfo-2.4.44-22.el7.x86_64.rpm\nopenldap-devel-2.4.44-22.el7.i686.rpm\nopenldap-devel-2.4.44-22.el7.x86_64.rpm\nopenldap-servers-2.4.44-22.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nopenldap-debuginfo-2.4.44-22.el7.ppc64.rpm\nopenldap-servers-sql-2.4.44-22.el7.ppc64.rpm\n\nppc64le:\nopenldap-debuginfo-2.4.44-22.el7.ppc64le.rpm\nopenldap-servers-sql-2.4.44-22.el7.ppc64le.rpm\n\ns390x:\nopenldap-debuginfo-2.4.44-22.el7.s390x.rpm\nopenldap-servers-sql-2.4.44-22.el7.s390x.rpm\n\nx86_64:\nopenldap-debuginfo-2.4.44-22.el7.x86_64.rpm\nopenldap-servers-sql-2.4.44-22.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nopenldap-2.4.44-22.el7.src.rpm\n\nx86_64:\nopenldap-2.4.44-22.el7.i686.rpm\nopenldap-2.4.44-22.el7.x86_64.rpm\nopenldap-clients-2.4.44-22.el7.x86_64.rpm\nopenldap-debuginfo-2.4.44-22.el7.i686.rpm\nopenldap-debuginfo-2.4.44-22.el7.x86_64.rpm\nopenldap-devel-2.4.44-22.el7.i686.rpm\nopenldap-devel-2.4.44-22.el7.x86_64.rpm\nopenldap-servers-2.4.44-22.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nopenldap-debuginfo-2.4.44-22.el7.x86_64.rpm\nopenldap-servers-sql-2.4.44-22.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-12243\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBX3Of5NzjgjWX9erEAQjUBg/+LuTU5msGMECYNN1kTZeKEOLCX9BedipK\njEUYzVTDdrrVglmfre4vnt8I5vLaVHoWD9Azv/0T7C7PqoDQTa+DuXgmUJ0gST8u\nMVhEsiDzTb2JPEPT0G5Mn/S7bL5buthYDlHJxTlnPimuvYBYIRRnP/65Kw0KnKyH\nJd0lheTvX0I6MbH+vArqU6LHeX21tvfPHlqfPWz3adCvqk7T0mKTM2N2qbeaeyMk\nNPkqy4L/79s897+76c8PaS9VNIC+zTq78V24n/VXE29tYr6lz5AI/PsyqqAg9u2W\nRwfngfaX47EBTWo5z+Wm3q+Jr2zpv2zEBOu0yxl/PUH0Knk2S5pu1u7Ou7jDC3ty\n4mCWo50wLOjkXspYQ1TWBhlGTe2fTVhH3l5emSR2z7y8bOKXR+GTS16uJ/un/Plr\n0AU3pnJNPTtEYGzvNRNrw2IFsN3TAnhZnve0LerryIsyc/3tz6UhdeLKCw5lScYl\nljGRanFYnwLL9+/h0CgjudrjtkB7F0SYNwiuSvr4yeAGG+/B6KFvtdii99azWhKf\nBqT1maqEizgtGaWIenkEMHYWHReC79Q+0DC9cyZGe5NJlndXZP0i1IkzL6wOLbAS\nDFqkF35KUgcQFh+kyPblKhX3HK3ZtBEFTeoV6rEQsgV8bU9HqFd1rjt/805/rIjk\nZiAkpTmTglI=6TQF\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nFor the oldstable distribution (stretch), this problem has been fixed\nin version 2.4.44+dfsg-5+deb9u4. \n\nFor the stable distribution (buster), this problem has been fixed in\nversion 2.4.47+dfsg-3+deb10u2. 
\n\nFor the detailed security status of openldap please refer to its\nsecurity tracker page at:\nhttps://security-tracker.debian.org/tracker/openldap\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAl6ofsxfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0Qx4Q//dOnPiP6bKHrFUFtyv59tV5Zpa1jJ6BmIr3/5ueODnBu8MHLJw8503zLJ\nI43LDTzvGkXrxy0Y28YC5Qpv1oHW3gvPzFsTrn2DObeUnHlKOOUsyzz3saHXyyzQ\nki+2UGsUXydSazDMeJzcoMfRdVpCtjc+GNTb/y7nxgwoKrz/WJplGstp2ibd8ftv\nJu4uT8VJZcC3IEGhkYXJ7TENlegOK2FCewYMZARrNT/tjIDyAqfKi2muCg7oadx/\n5WZGLW7Pdw25jFknVy/Y7fEyJDWQdPH7NchK5tZy6D1lWQh67GcvJFSo5HICwb+n\nFilP29mIBbS96JQq6u5jWWMpAD6RPCtIltak4QdYptjdrQnTDFy3RJSTdZeis8ty\nHKwYJgNzVG6SCy04t3D+zeMbgEZOvj6GWrURQUqZJQmc4V9l89E0/D7zV3AX9Q9v\n0hKEtpc//bZrS71QVqJvkWvrgfutB72Vnqfull+DBxvt33ma5W2il6kxGMwJK3S9\n0lk60dzEDCdYp8TE61y8N4z+2IB/Otg9Ni2I8pmaE5s1/ZUva+8GhSjbmGyIhbpk\np55kTiZUgpmu6EK2Kvjkh9rMlaa1IHXL8tdrbo8pRVtQHlA8/HUgoGiUHuX1h+Kw\nLZVjIV/L4qOFQ54uMbSscZgMEvhfW00fe3o2zI8WQZ9IPCQ3oRg=\n=K3JD\n-----END PGP SIGNATURE-----\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-12243"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "PACKETSTORM",
"id": "159347"
},
{
"db": "PACKETSTORM",
"id": "168811"
}
],
"trust": 2.34
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2020-12243",
"trust": 3.2
},
{
"db": "PACKETSTORM",
"id": "159347",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "161916",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "162130",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "162142",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "157602",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "161727",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159553",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2021.1207",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1637",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2604",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0845",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1742.2",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3631",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1742",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1458",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0986",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3535",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1193",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1569",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1613",
"trust": 0.6
},
{
"db": "ICS CERT",
"id": "ICSA-22-116-01",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "157601",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "159552",
"trust": 0.1
},
{
"db": "CNVD",
"id": "CNVD-2020-27485",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-164902",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159661",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168811",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "PACKETSTORM",
"id": "159347"
},
{
"db": "PACKETSTORM",
"id": "168811"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"id": "VAR-202004-0530",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
}
],
"trust": 0.725
},
"last_update_date": "2024-07-23T21:32:40.951000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "[SECURITY] [DLA 2199-1] openldap security update",
"trust": 0.8,
"url": "https://lists.debian.org/debian-lts-announce/2020/05/msg00001.html"
},
{
"title": "DSA-4666",
"trust": 0.8,
"url": "https://www.debian.org/security/2020/dsa-4666"
},
{
"title": "Issue#9248",
"trust": 0.8,
"url": "https://git.openldap.org/openldap/openldap/-/blob/openldap_rel_eng_2_4/changes"
},
{
"title": "ITS#9202 limit depth of nested filters",
"trust": 0.8,
"url": "https://git.openldap.org/openldap/openldap/-/commit/98464c11df8247d6a11b52e294ba5dd4f0380440"
},
{
"title": "Issue 9202",
"trust": 0.8,
"url": "https://bugs.openldap.org/show_bug.cgi?id=9202"
},
{
"title": "OpenLDAP Remediation of resource management error vulnerabilities",
"trust": 0.6,
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=118093"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-674",
"trust": 1.1
},
{
"problemtype": "CWE-400",
"trust": 0.9
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 2.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12243"
},
{
"trust": 1.7,
"url": "https://git.openldap.org/openldap/openldap/-/blob/openldap_rel_eng_2_4/changes"
},
{
"trust": 1.7,
"url": "https://git.openldap.org/openldap/openldap/-/commit/98464c11df8247d6a11b52e294ba5dd4f0380440"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20200511-0003/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht211289"
},
{
"trust": 1.7,
"url": "https://www.debian.org/security/2020/dsa-4666"
},
{
"trust": 1.7,
"url": "https://bugs.openldap.org/show_bug.cgi?id=9202"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2020/05/msg00001.html"
},
{
"trust": 1.7,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-05/msg00016.html"
},
{
"trust": 1.7,
"url": "https://usn.ubuntu.com/4352-1/"
},
{
"trust": 1.7,
"url": "https://usn.ubuntu.com/4352-2/"
},
{
"trust": 0.8,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-12243"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1742.2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3535/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1458/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1569/"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-116-01"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159553/red-hat-security-advisory-2020-4255-01.html"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht211289"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0986"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/openldap-denial-of-service-via-search-filters-32124"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1207"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0845"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2604"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159347/red-hat-security-advisory-2020-4041-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161727/red-hat-security-advisory-2021-0778-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/157602/ubuntu-security-notice-usn-4352-2.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1637/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161916/red-hat-security-advisory-2021-0949-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162142/red-hat-security-advisory-2021-1079-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1613/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1193"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3631/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1742/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162130/red-hat-security-advisory-2021-1129-01.html"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-12243"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17006"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-12749"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17023"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17023"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-6829"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12403"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-20388"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11756"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11756"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17498"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12749"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-7595"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17006"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-5094"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-19956"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12400"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11727"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11719"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-15903"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2018-20843"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12402"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5188"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12401"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11719"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5094"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11727"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-5188"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17498"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-20907"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12402"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971"
},
{
"trust": 0.3,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12401"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-8177"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12400"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-1971"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12403"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-19126"
},
{
"trust": 0.2,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-12652"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14973"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17546"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14973"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2017-12652"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9283"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19126"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4264"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-2974"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5482"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2226"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2780"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-16935"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2974"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2752"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20386"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2574"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14352"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14822"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14822"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2225"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5482"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12825"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-18190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2181"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2182"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.3/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-18190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2224"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:1079"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8625"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3156"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3447"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-5313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20180"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-5313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14422"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20178"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25211"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:1129"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12723"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25645"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28374"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14351"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25705"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.10/html-single/installing_3scale/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29661"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20265"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0427"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14351"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19532"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12723"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-7053"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14040"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19532"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:0949"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8177"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_developer_cli/installing-odo.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-7595"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-6829"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openldap/2.4.48+dfsg-1ubuntu1.1"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4352-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openldap/2.4.42+dfsg-2ubuntu3.8"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openldap/2.4.45+dfsg-1ubuntu1.5"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openldap/2.4.49+dfsg-2ubuntu1.2"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/openldap"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "PACKETSTORM",
"id": "159347"
},
{
"db": "PACKETSTORM",
"id": "168811"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "PACKETSTORM",
"id": "159347"
},
{
"db": "PACKETSTORM",
"id": "168811"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2020-04-28T00:00:00",
"db": "VULHUB",
"id": "VHN-164902"
},
{
"date": "2020-06-05T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"date": "2020-10-21T15:40:32",
"db": "PACKETSTORM",
"id": "159661"
},
{
"date": "2021-04-09T15:06:13",
"db": "PACKETSTORM",
"id": "162142"
},
{
"date": "2021-04-08T14:00:00",
"db": "PACKETSTORM",
"id": "162130"
},
{
"date": "2021-03-22T15:36:55",
"db": "PACKETSTORM",
"id": "161916"
},
{
"date": "2020-05-07T15:33:27",
"db": "PACKETSTORM",
"id": "157601"
},
{
"date": "2020-09-30T15:43:05",
"db": "PACKETSTORM",
"id": "159347"
},
{
"date": "2020-04-28T19:12:00",
"db": "PACKETSTORM",
"id": "168811"
},
{
"date": "2020-04-28T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"date": "2020-04-28T19:15:12.267000",
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-04-29T00:00:00",
"db": "VULHUB",
"id": "VHN-164902"
},
{
"date": "2020-06-05T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"date": "2022-04-27T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"date": "2022-04-29T13:24:38.527000",
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
}
],
"trust": 0.7
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
    "data": "Resource exhaustion vulnerability in OpenLDAP",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "resource management error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
}
],
"trust": 0.6
}
}
VAR-202208-0404
Vulnerability from variot - Updated: 2024-07-23 21:15
zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field. NOTE: only applications that call inflateGetHeader are affected. Some common applications bundle the affected zlib source code but may be unable to call inflateGetHeader (e.g., see the nodejs/node reference). See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
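The "gzip header extra field" named in CVE-2022-37434 is the optional FEXTRA block of a gzip member header (RFC 1952), and the flaw is only reachable in programs that ask zlib to surface that parsed header via inflateGetHeader(). A minimal sketch of what such a header looks like — file names are illustrative, and plain gunzip, which merely skips the field, handles it safely:

```shell
# Craft a gzip member whose header carries a FEXTRA field, then decompress it.
# Only programs that request the parsed header through zlib's inflateGetHeader()
# touch the vulnerable code path; ordinary decompression skips the extra field.
printf 'hello\n' | gzip -c > plain.gz
{
  head -c 3 plain.gz                # magic bytes + compression method (1f 8b 08)
  printf '\004'                     # FLG byte with only FEXTRA set
  tail -c +5 plain.gz | head -c 6   # original MTIME, XFL and OS bytes
  printf '\004\000EXTR'             # XLEN = 4 (little-endian) + 4 bytes of extra data
  tail -c +11 plain.gz              # deflate stream, CRC32 and ISIZE, unchanged
} > extra.gz
gunzip -c extra.gz                  # prints "hello"
```

The CRC32 in the trailer covers only the uncompressed data, so splicing an extra field into the header leaves the member valid.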
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Security Fix(es):
- github.com/Masterminds/vcs: Command Injection via argument injection (CVE-2022-21235)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. To check for available updates, use the OpenShift CLI (oc) or web console. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
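The update check described above can be run from a terminal as well as the web console; a brief sketch, assuming cluster-admin access to an OpenShift 4.11 cluster:

```shell
# Show the cluster's current version and the updates available in its channel,
# then (optionally) start an update to the newest available version.
oc adm upgrade
oc adm upgrade --to-latest=true
```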
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
You may download the oc tool and use it to inspect release image metadata for x86_64, s390x, ppc64le, and aarch64 architectures. The image digests may be found at https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags.
The sha values for the release are
(For x86_64 architecture) The image digest is sha256:c6771b12bd873c0e3e5fbc7afa600d92079de6534dcb52f09cb1d22ee49608a9
(For s390x architecture) The image digest is sha256:622b5361f95d1d512ea84f363ac06155cbb9ee28e85ccaae1acd80b98b660fa8
(For ppc64le architecture) The image digest is sha256:50c131cf85dfb00f258af350a46b85eff8fb8084d3e1617520cd69b59caeaff7
(For aarch64 architecture) The image digest is sha256:9e575c4ece9caaf31acbef246ccad71959cd5bf634e7cb284b0849ddfa205ad7
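The digests above can be resolved into full release metadata with the oc tool the advisory mentions; a sketch using the advisory's x86_64 digest:

```shell
# Inspect the release image for this errata by digest rather than tag, so the
# content inspected is pinned to exactly what the advisory shipped.
oc adm release info \
  quay.io/openshift-release-dev/ocp-release@sha256:c6771b12bd873c0e3e5fbc7afa600d92079de6534dcb52f09cb1d22ee49608a9
```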
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift CLI (oc) or web console. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2215317 - CVE-2022-21235 github.com/Masterminds/vcs: Command Injection via argument injection
- JIRA issues fixed (https://issues.redhat.com/):
OCPBUGS-15446 - (release-4.11) gather "gateway-mode-config" config map from "openshift-network-operator" namespace
OCPBUGS-15532 - visiting Configurations page returns error Cannot read properties of undefined (reading 'apiGroup')
OCPBUGS-15645 - Can't use git lfs in BuildConfig git source with strategy Docker
OCPBUGS-15739 - Environment cannot find Python
OCPBUGS-15758 - [release-4.11] Bump Jenkins and Jenkins Agent Base image versions
OCPBUGS-15942 - 9% of OKD tests failing on error: tag latest failed: Internal error occurred: registry.centos.org/dotnet/dotnet-31-centos7:latest: Get "https://registry.centos.org/v2/": dial tcp: lookup registry.centos.org on 172.30.0.10:53: no such host
OCPBUGS-15966 - [4.12] MetalLB contains incorrect data Correct and incorrect MetalLB resources coexist should have correct statuses
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
=====================================================================
                   Red Hat Security Advisory

Synopsis:          Important: Red Hat OpenShift Data Foundation 4.13.0 security and bug fix update
Advisory ID:       RHSA-2023:3742-02
Product:           Red Hat OpenShift Data Foundation
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:3742
Issue date:        2023-06-21
CVE Names:         CVE-2015-20107 CVE-2018-25032 CVE-2020-10735
                   CVE-2020-16250 CVE-2020-16251 CVE-2020-17049
                   CVE-2021-3765 CVE-2021-3807 CVE-2021-4231
                   CVE-2021-4235 CVE-2021-4238 CVE-2021-28861
                   CVE-2021-43519 CVE-2021-43998 CVE-2021-44531
                   CVE-2021-44532 CVE-2021-44533 CVE-2021-44964
                   CVE-2021-46828 CVE-2021-46848 CVE-2022-0670
                   CVE-2022-1271 CVE-2022-1304 CVE-2022-1348
                   CVE-2022-1586 CVE-2022-1587 CVE-2022-2309
                   CVE-2022-2509 CVE-2022-2795 CVE-2022-2879
                   CVE-2022-2880 CVE-2022-3094 CVE-2022-3358
                   CVE-2022-3515 CVE-2022-3517 CVE-2022-3715
                   CVE-2022-3736 CVE-2022-3821 CVE-2022-3924
                   CVE-2022-4415 CVE-2022-21824 CVE-2022-23540
                   CVE-2022-23541 CVE-2022-24903 CVE-2022-26280
                   CVE-2022-27664 CVE-2022-28805 CVE-2022-29154
                   CVE-2022-30635 CVE-2022-31129 CVE-2022-32189
                   CVE-2022-32190 CVE-2022-33099 CVE-2022-34903
                   CVE-2022-35737 CVE-2022-36227 CVE-2022-37434
                   CVE-2022-38149 CVE-2022-38900 CVE-2022-40023
                   CVE-2022-40303 CVE-2022-40304 CVE-2022-40897
                   CVE-2022-41316 CVE-2022-41715 CVE-2022-41717
                   CVE-2022-41723 CVE-2022-41724 CVE-2022-41725
                   CVE-2022-42010 CVE-2022-42011 CVE-2022-42012
                   CVE-2022-42898 CVE-2022-42919 CVE-2022-43680
                   CVE-2022-45061 CVE-2022-45873 CVE-2022-46175
                   CVE-2022-47024 CVE-2022-47629 CVE-2022-48303
                   CVE-2022-48337 CVE-2022-48338 CVE-2022-48339
                   CVE-2023-0361 CVE-2023-0620 CVE-2023-0665
                   CVE-2023-2491 CVE-2023-22809 CVE-2023-24329
                   CVE-2023-24999 CVE-2023-25000 CVE-2023-25136
=====================================================================
- Summary:
Updated images that include numerous enhancements, security, and bug fixes are now available in Red Hat Container Registry for Red Hat OpenShift Data Foundation 4.13.0 on Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.
Security Fix(es):
- goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be (CVE-2021-4238)
- decode-uri-component: improper input validation resulting in DoS (CVE-2022-38900)
- vault: Hashicorp Vault AWS IAM Integration Authentication Bypass (CVE-2020-16250)
- vault: GCP Auth Method Allows Authentication Bypass (CVE-2020-16251)
- nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes (CVE-2021-3807)
- go-yaml: Denial of Service in go-yaml (CVE-2021-4235)
- vault: incorrect policy enforcement (CVE-2021-43998)
- nodejs: Improper handling of URI Subject Alternative Names (CVE-2021-44531)
- nodejs: Certificate Verification Bypass via String Injection (CVE-2021-44532)
- nodejs: Incorrect handling of certificate subject and issuer fields (CVE-2021-44533)
- golang: archive/tar: unbounded memory consumption when reading headers (CVE-2022-2879)
- golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters (CVE-2022-2880)
- nodejs-minimatch: ReDoS via the braceExpand function (CVE-2022-3517)
- jsonwebtoken: Insecure default algorithm in jwt.verify() could lead to signature validation bypass (CVE-2022-23540)
- jsonwebtoken: Insecure implementation of key retrieval function could lead to Forgeable Public/Private Tokens from RSA to HMAC (CVE-2022-23541)
- golang: net/http: handle server errors after sending GOAWAY (CVE-2022-27664)
- golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)
- golang: net/url: JoinPath does not strip relative path components in all circumstances (CVE-2022-32190)
- consul: Consul Template May Expose Vault Secrets When Processing Invalid Input (CVE-2022-38149)
- vault: insufficient certificate revocation list checking (CVE-2022-41316)
- golang: regexp/syntax: limit memory used by parsing regexps (CVE-2022-41715)
- golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)
- net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding (CVE-2022-41723)
- golang: crypto/tls: large handshake records may cause panics (CVE-2022-41724)
- golang: net/http, mime/multipart: denial of service from excessive resource consumption (CVE-2022-41725)
- json5: Prototype Pollution in JSON5 via Parse Method (CVE-2022-46175)
- vault: Vault's Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File (CVE-2023-0620)
- hashicorp/vault: Vault's PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata (CVE-2023-0665)
- Hashicorp/vault: Vault Fails to Verify if Approle SecretID Belongs to Role During a Destroy Operation (CVE-2023-24999)
- hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations (CVE-2023-25000)
- validator: Inefficient Regular Expression Complexity in Validator.js (CVE-2021-3765)
- nodejs: Prototype pollution via console.table properties (CVE-2022-21824)
- golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service (CVE-2022-32189)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index
All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images that provide numerous bug fixes and enhancements.
- Bugs fixed (https://bugzilla.redhat.com/):
1786696 - UI->Dashboards->Overview->Alerts shows MON components are at different versions, though they are NOT
1855339 - Wrong version of ocs-storagecluster
1943137 - [Tracker for BZ #1945618] rbd: Storage is not reclaimed after persistentvolumeclaim and job that utilized it are deleted
1944687 - [RFE] KMS server connection lost alert
1989088 - [4.8][Multus] UX experience issues and enhancements
2005040 - Uninstallation of ODF StorageSystem via OCP Console fails, gets stuck in Terminating state
2005830 - [DR] DRPolicy resource should not be editable after creation
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes
2028193 - CVE-2021-43998 vault: incorrect policy enforcement
2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names
2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection
2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields
2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties
2042914 - [Tracker for BZ #2013109] [UI] Refreshing web console from the pop-up is taking to Install Operator page.
2052252 - CVE-2021-44531 CVE-2021-44532 CVE-2021-44533 CVE-2022-21824 [CVE] nodejs: various flaws [openshift-data-foundation-4]
2101497 - ceph_mon_metadata metrics are not collected properly
2101916 - must-gather is not collecting ceph logs or coredumps
2102304 - [GSS] Remove the entry of removed node from Storagecluster under Node Topology
2104148 - route ocs-storagecluster-cephobjectstore misconfigured to use http and https on same http route in haproxy.config
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service
2115020 - [RDR] Sync schedule is not removed from mirrorpeer yaml after DR Policy is deleted
2115616 - [GSS] failing to change ownership of the NFS based PVC for PostgreSQL pod by using kube_pv_chown utility
2119551 - CVE-2022-38149 consul: Consul Template May Expose Vault Secrets When Processing Invalid Input
2120098 - [RDR] Even before an action gets fully completed, PeerReady and Available are reported as True in the DRPC yaml
2120944 - Large Omap objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
2124668 - CVE-2022-32190 golang: net/url: JoinPath does not strip relative path components in all circumstances
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2126299 - CVE-2021-3765 validator: Inefficient Regular Expression Complexity in Validator.js
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
2135339 - CVE-2022-41316 vault: insufficient certificate revocation list checking
2139037 - [cee/sd]Unable to access s3 via RGW route ocs-storagecluster-cephobjectstore
2141095 - [RDR] Storage System page on ACM Hub is visible even when data observability is not enabled
2142651 - RFE: OSDs need ability to bind to a service IP instead of the pod IP to support RBD mirroring in OCP clusters
2142894 - Credentials are ignored when creating a Backing/Namespace store after prompted to enter a name for the resource
2142941 - RGW cloud Transition. HEAD/GET requests to MCG are failing with 403 error
2143944 - [GSS] unknown parameter name "FORCE_OSD_REMOVAL"
2144256 - [RDR] [UI] DR Application applied to a single DRPolicy starts showing connected to multiple policies due to console flickering
2151903 - [MCG] Azure bs/ns creation fails with target bucket does not exists
2152143 - [Noobaa Clone] Secrets are used in env variables
2154250 - NooBaa Bucket Quota alerts are not working
2155507 - RBD reclaimspace job fails when the PVC is not mounted
2155743 - ODF Dashboard fails to load
2156067 - [RDR] [UI] When Peer Ready isn't True, UI doesn't reset the error message even when no subscription group is selected
2156069 - [UI] Instances of OCS can be seen on BlockPool action modals
2156263 - CVE-2022-46175 json5: Prototype Pollution in JSON5 via Parse Method
2156519 - 4.13: odf-csi-addons-operator failed with OwnNamespace InstallModeType not supported
2156727 - CVE-2021-4235 go-yaml: Denial of Service in go-yaml
2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
2157876 - [OCP Tracker] [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn't appear after ODF upgrade resulting in dashboard crash
2158922 - Namespace store fails to get created via the ODF UI
2159676 - rbd-mirror logs are rotated very frequently, increase the default maxlogsize for rbd-mirror
2161274 - CVE-2022-41717 golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests
2161879 - logging issue when deleting webhook resources
2161937 - collect kernel and journal logs from all worker nodes
2162257 - [RDR][CEPHFS] sync/replication is getting stopped for some pvc
2164617 - Unable to expand ocs-storagecluster-ceph-rbd PVCs provisioned in Filesystem mode
2165495 - Placement scheduler is using too much resources
2165504 - Sizer sharing link is broken
2165929 - [RFE] ODF bluewash introduction in 4.12.x
2165938 - ocs-operator CSV is missing disconnected env annotation.
2165984 - [RDR] Replication stopped for images is represented with incorrect color
2166222 - CSV is missing disconnected env annotation and relatedImages spec
2166234 - Application user unable to invoke Failover and Relocate actions
2166869 - Match the version of consoleplugin to odf operator
2167299 - [RFE] ODF bluewash introduction in 4.12.x
2167308 - [mcg-clone] Security and VA issues with ODF operator
2167337 - CVE-2020-16250 vault: Hashicorp Vault AWS IAM Integration Authentication Bypass
2167340 - CVE-2020-16251 vault: GCP Auth Method Allows Authentication Bypass
2167946 - CSV is missing disconnected env annotation and relatedImages spec
2168113 - [Ceph Tracker BZ #2141110] [cee/sd][Bluestore] Newly deployed bluestore OSD's showing high fragmentation score
2168635 - fix redirect link to operator details page (OCS dashboard)
2168840 - [Fusion-aaS][ODF 4.13]Within 'prometheus-ceph-rules' the namespace for 'rook-ceph-mgr' jobs should be configurable.
2168849 - Must-gather doesn't collect coredump logs crucial for OSD crash events
2169375 - CVE-2022-23541 jsonwebtoken: Insecure implementation of key retrieval function could lead to Forgeable Public/Private Tokens from RSA to HMAC
2169378 - CVE-2022-23540 jsonwebtoken: Insecure default algorithm in jwt.verify() could lead to signature validation bypass
2169779 - [vSphere]: rook-ceph-mon- pvc are in pending state
2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS
2170673 - [RDR] Different replication states of PVC images aren't correctly distinguished and representated on UI
2172089 - [Tracker for Ceph BZ 2174461] rook-ceph-nfs pod is stuck at status 'CreateContainerError' after enabling NFS in ODF 4.13
2172365 - [csi-addons] odf-csi-addons-operator oomkilled with fresh installation 4.12
2172521 - No OSD pods are created for 4.13 LSO deployment
2173161 - ODF-console can not start when you disable IPv6 on Node with kernel parameter.
2173528 - Creation of OCS operator tag automatically for verified commits
2173534 - When on StorageSystem details click on History back btn it shows blank body
2173926 - [RFE] Include changes in MCG for new Ceph RGW transition headers
2175612 - noobaa-core-0 crashing and storagecluster not getting to ready state during ODF deployment with FIPS enabled in 4.13cluster
2175685 - RGW OBC creation via the UI is blocked by "Address form errors to proceed" error
2175714 - UI fix- capitalization
2175867 - Rook sets cephfs kernel mount options even when mon is using v1 port
2176080 - odf must-gather should collect output of oc get hpa -n openshift-storage
2176456 - [RDR] ramen-hub-operator and ramen-dr-cluster-operator is going into CLBO post deployment
2176739 - [UI] CSI Addons operator icon is broken
2176776 - Enable save options only when the protected apps has labels for manage DRPolicy
2176798 - [IBM Z ] Multi Cluster Orchestrator operator is not available in the Operator Hub
2176809 - [IBM Z ] DR operator is not available in the Operator Hub
2177134 - Next button if disabled for storage system deployment flow for IBM Ceph Storage security and network step when there is no OCS installed already
2177221 - Enable DR dashboard only when ACM observability is enabled
2177325 - Noobaa-db pod is taking longer time to start up in ODF 4.13
2177695 - DR dashbaord showing incorrect RPO data
2177844 - CVE-2023-24999 Hashicorp/vault: Vault Fails to Verify if Approle SecretID Belongs to Role During a Destroy Operation
2178033 - node topology warnings tab doesn't show pod warnings
2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding
2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption
2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics
2178588 - No rack names on ODF Topology
2178619 - odf-operator failing to resolve its sub-dependencies leaving the ocs-consumer/provider addon in a failed and halted state
2178682 - [GSS] Add the valid AWS GovCloud regions in OCS UI.
2179133 - [UI] A blank page appears while selecting Storage Pool for creating Encrypted Storage Class
2179337 - Invalid storage system href link on the ODF multicluster dashboard
2179403 - (4.13) Mons are failing to start when msgr2 is required with RHCS 6.1
2179846 - [IBM Z] In RHCS external mode Cephobjectstore creation fails as it reports that the "object store name cannot be longer than 38 characters"
2179860 - [MCG] Bucket replication with deletion sync isn't complete
2179976 - [ODF 4.13] Missing the status-reporter binary causing pods "report-status-to-provider" remain in CreateContainerError on ODF to ODF cluster on ROSA
2179981 - ODF Topology search bar mistakes to find searched node/pod
2179997 - Topology. Exit full screen does not appear in Full screen mode
2180211 - StorageCluster stuck in progressing state for Thales KMS deployment
2180397 - Last sync time is missing on application set's disaster recovery status popover
2180440 - odf-monitoring-tool. YAML file misjudged as corrupted
2180921 - Deployment with external cluster in ODF 4.13 with unable to use cephfs as backing store for image_registry
2181112 - [RDR] [UI] Hide disable DR functionality as it would be un-tested in 4.13
2181133 - CI: backport E2E job improvements
2181446 - [KMS][UI] PVC provisioning failed in case of vault kubernetes authentication is configured.
2181535 - [GSS] Object storage in degraded state
2181551 - Build: move to 'dependencies' the ones required for running a build
2181832 - Create OBC via UI, placeholder on StorageClass dropped
2181949 - [ODF Tracker] [RFE] Catch MDS damage to the dentry's first snapid
2182041 - OCS-Operator expects NooBaa CRDs to be present on the cluster when installed directly without ODF Operator
2182296 - [Fusion-aaS][ODF 4.13]must-gather does not collect relevant logs when storage cluster is not in openshift-storage namespace
2182375 - [MDR] Not able to fence DR clusters
2182644 - [IBM Z] MDR policy creation fails unless the ocs-operator pod is restarted on the managed clusters
2182664 - Topology view should hide the sidebar when changing levels
2182703 - [RDR] After upgrading from 4.12.2 to 4.13.0 version.odf.openshift.io cr is not getting updated with latest ODF version
2182972 - CVE-2023-25000 hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations
2182981 - CVE-2023-0665 hashicorp/vault: Vault's PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata
2183155 - failed to mount the the cephfs subvolume as subvolumegroup name is not sent in the GetStorageConfig RPC call
2183196 - [Fusion-aaS] Collect Must-gather logs from the managed-fusion agent namesapce
2183266 - [Fusion aaS Rook ODF 4.13]] Rook-ceph-operator pod should allow OBC CRDs to be optional instead of causing a crash when not present
2183457 - [RDR] when running any ceph cmd we see error 2023-03-31T08:25:31.844+0000 7f8deaffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
2183478 - [MDR][UI] Cannot relocate subscription based apps, Appset based apps are possible to relocate
2183520 - [Fusion-aaS] csi-cephfs-plugin pods are not created after installing ocs-client-operator
2184068 - [Fusion-aaS] Failed to mount CephFS volumes while creating pods
2184605 - [ODF 4.13][Fusion-aaS] OpenShift Data Foundation Client operator is listed in OperatorHub and installable from UI
2184663 - CVE-2023-0620 vault: Vault's Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File
2184769 - {Fusion-aaS][ODF 4.13]Remove storageclassclaim cr and create new cr storageclass request cr
2184773 - multicluster-orchestrator should not reset spec.network.multiClusterService.Enabled field added by user
2184892 - Don't pass encryption options to ceph cluster in odf external mode to provider/consumer cluster
2184984 - Topology Sidebar alerts panel: alerts accordion does not toggle when clicking on alert severity text
2185164 - [KMS][VAULT] PVC provisioning is failing when the Vault (HCP) Kubernetes authentication is set.
2185188 - Fix storagecluster watch request for OCSInitialization
2185757 - add NFS dashboard
2185871 - [MDR][ACM-Tracker] Deleting an Appset based application does not delete its placement
2186171 - [GSS] "disableLoadBalancerService: true" config is reconciled after modifying the number of NooBaa endpoints
2186225 - [RDR] when running any ceph cmd we see error 2023-03-31T08:25:31.844+0000 7f8deaffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
2186475 - handle different network connection spec & Pass appropriate options for all the cases of Network Spec
2186752 - [translations] add translations for 4.13
2187251 - sync ocs and odf with the latest rook
2187296 - [MCG] Can't opt out of deletions sync once log-based replication with deletions sync is set
2187736 - [RDR] Replication history graph is showing incorrect value
2187952 - When cluster controller is cancelled frequently, multiple simultaneous controllers cause issues since need to wait for shutdown before continuing new controller
2187969 - [ODFMS-Migration ] [OCS Client Operator] csi-rbdplugin stuck in ImagePullBackOff on consumer clusters after Migration
2187986 - [MDR] ramen-dr-cluster-operator pod is in CLBO after assigning dr policy to an appset based app
2188053 - ocs-metrics-exporter cannot list/watch StorageCluster, StorageClass, CephBlockPool and other resources
2188238 - [RDR] Avoid using the terminologies "SLA" in DR dashbaord
2188303 - [RDR] Maintenance mode is not enabled after initiating failover action
2188427 - [External mode upgrade]: Upgrade from 4.12 -> 4.13 external mode is failing because rook-ceph-operator is not reaching clean state
2188666 - wrong label in new storageclassrequest cr
2189483 - After upgrade noobaa-db-pg-0 pod using old image in one of container
2189929 - [RDR/MDR] [UI] Dashboard fon size are very uneven
2189982 - [RDR] ocs_rbd_client_blocklisted datapoints and the corresponding alert is not getting generated
2189984
- [KMS][VAULT] Storage cluster remains in 'Progressing' state during deployment with storage class encryption, despite all pods being up and running. 2190129 - OCS Provider Server logs are incorrect 2190241 - nfs metric details are unavailable and server health is displaying as "Degraded" under Network file system tab in UI 2192088 - [IBM P] rbd_default_map_options value not set to ms_mode=secure in in-transit encryption enabled ODF cluster 2192670 - Details tab for nodes inside Topology throws "Something went wrong" on IBM Power platform 2192824 - [4.13] Fix Multisite in external cluster 2192875 - Enable ceph-exporter in rook 2193114 - MCG replication is failing due to OC binary incompatible on Power platform 2193220 - [Stretch cluster] CephCluster is updated frequently due to changing ordering of zones 2196176 - MULTUS UI, There is no option to change the multus configuration after we configure the params 2196236 - [RDR] With ACM 2.8 User is not able to apply Drpolicy to subscription workload 2196298 - [RDR] DRPolicy doesn't show connected application when subscription based workloads are deployed via CLI 2203795 - ODF Monitoring is missing some of the ceph_ metric values 2208029 - nfs server health is always displaying as "Degraded" under Network file system tab in UI. 2208079 - rbd mirror daemon is commonly not upgraded 2208269 - [RHCS Tracker] After add capacity the rebalance does not complete, and we see 2 PGs in active+clean+scrubbing and 1 active+clean+scrubbing+deep 2208558 - [MDR] ramen-dr-cluster-operator pod crashes during failover 2208962 - [UI] ODF Topology. 
Degraded cluster don't show red canvas on cluster level 2209364 - ODF dashboard crashes when OCP and ODF are upgraded 2209643 - Multus, Cephobjectstore stuck on Progressing state because " failed to create or retrieve rgw admin ops user" 2209695 - When collecting Must-gather logs shows /usr/bin/gather_ceph_resources: line 341: jq: command not found 2210964 - [UI][MDR] After hub recovery in overview tab of data policies Application set apps count is not showing 2211334 - The replication history graph is very unclear 2211343 - [MCG-Only]: upgrade failed from 4.12 to 4.13 due to missing CSI_ENABLE_READ_AFFINITY in ConfigMap openshift-storage/ocs-operator-config 2211704 - Multipart uploads fail to a Azure namespace bucket when user MD is sent as part of the upload
- References:
https://access.redhat.com/security/cve/CVE-2015-20107
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2020-10735
https://access.redhat.com/security/cve/CVE-2020-16250
https://access.redhat.com/security/cve/CVE-2020-16251
https://access.redhat.com/security/cve/CVE-2020-17049
https://access.redhat.com/security/cve/CVE-2021-3765
https://access.redhat.com/security/cve/CVE-2021-3807
https://access.redhat.com/security/cve/CVE-2021-4231
https://access.redhat.com/security/cve/CVE-2021-4235
https://access.redhat.com/security/cve/CVE-2021-4238
https://access.redhat.com/security/cve/CVE-2021-28861
https://access.redhat.com/security/cve/CVE-2021-43519
https://access.redhat.com/security/cve/CVE-2021-43998
https://access.redhat.com/security/cve/CVE-2021-44531
https://access.redhat.com/security/cve/CVE-2021-44532
https://access.redhat.com/security/cve/CVE-2021-44533
https://access.redhat.com/security/cve/CVE-2021-44964
https://access.redhat.com/security/cve/CVE-2021-46828
https://access.redhat.com/security/cve/CVE-2021-46848
https://access.redhat.com/security/cve/CVE-2022-0670
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1304
https://access.redhat.com/security/cve/CVE-2022-1348
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1587
https://access.redhat.com/security/cve/CVE-2022-2309
https://access.redhat.com/security/cve/CVE-2022-2509
https://access.redhat.com/security/cve/CVE-2022-2795
https://access.redhat.com/security/cve/CVE-2022-2879
https://access.redhat.com/security/cve/CVE-2022-2880
https://access.redhat.com/security/cve/CVE-2022-3094
https://access.redhat.com/security/cve/CVE-2022-3358
https://access.redhat.com/security/cve/CVE-2022-3515
https://access.redhat.com/security/cve/CVE-2022-3517
https://access.redhat.com/security/cve/CVE-2022-3715
https://access.redhat.com/security/cve/CVE-2022-3736
https://access.redhat.com/security/cve/CVE-2022-3821
https://access.redhat.com/security/cve/CVE-2022-3924
https://access.redhat.com/security/cve/CVE-2022-4415
https://access.redhat.com/security/cve/CVE-2022-21824
https://access.redhat.com/security/cve/CVE-2022-23540
https://access.redhat.com/security/cve/CVE-2022-23541
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-26280
https://access.redhat.com/security/cve/CVE-2022-27664
https://access.redhat.com/security/cve/CVE-2022-28805
https://access.redhat.com/security/cve/CVE-2022-29154
https://access.redhat.com/security/cve/CVE-2022-30635
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/cve/CVE-2022-32189
https://access.redhat.com/security/cve/CVE-2022-32190
https://access.redhat.com/security/cve/CVE-2022-33099
https://access.redhat.com/security/cve/CVE-2022-34903
https://access.redhat.com/security/cve/CVE-2022-35737
https://access.redhat.com/security/cve/CVE-2022-36227
https://access.redhat.com/security/cve/CVE-2022-37434
https://access.redhat.com/security/cve/CVE-2022-38149
https://access.redhat.com/security/cve/CVE-2022-38900
https://access.redhat.com/security/cve/CVE-2022-40023
https://access.redhat.com/security/cve/CVE-2022-40303
https://access.redhat.com/security/cve/CVE-2022-40304
https://access.redhat.com/security/cve/CVE-2022-40897
https://access.redhat.com/security/cve/CVE-2022-41316
https://access.redhat.com/security/cve/CVE-2022-41715
https://access.redhat.com/security/cve/CVE-2022-41717
https://access.redhat.com/security/cve/CVE-2022-41723
https://access.redhat.com/security/cve/CVE-2022-41724
https://access.redhat.com/security/cve/CVE-2022-41725
https://access.redhat.com/security/cve/CVE-2022-42010
https://access.redhat.com/security/cve/CVE-2022-42011
https://access.redhat.com/security/cve/CVE-2022-42012
https://access.redhat.com/security/cve/CVE-2022-42898
https://access.redhat.com/security/cve/CVE-2022-42919
https://access.redhat.com/security/cve/CVE-2022-43680
https://access.redhat.com/security/cve/CVE-2022-45061
https://access.redhat.com/security/cve/CVE-2022-45873
https://access.redhat.com/security/cve/CVE-2022-46175
https://access.redhat.com/security/cve/CVE-2022-47024
https://access.redhat.com/security/cve/CVE-2022-47629
https://access.redhat.com/security/cve/CVE-2022-48303
https://access.redhat.com/security/cve/CVE-2022-48337
https://access.redhat.com/security/cve/CVE-2022-48338
https://access.redhat.com/security/cve/CVE-2022-48339
https://access.redhat.com/security/cve/CVE-2023-0361
https://access.redhat.com/security/cve/CVE-2023-0620
https://access.redhat.com/security/cve/CVE-2023-0665
https://access.redhat.com/security/cve/CVE-2023-2491
https://access.redhat.com/security/cve/CVE-2023-22809
https://access.redhat.com/security/cve/CVE-2023-24329
https://access.redhat.com/security/cve/CVE-2023-24999
https://access.redhat.com/security/cve/CVE-2023-25000
https://access.redhat.com/security/cve/CVE-2023-25136
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.
Bug Fix(es):
- Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api (BZ#2033191)
- Restart of VM Pod causes SSH keys to be regenerated within VM (BZ#2087177)
- Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR (BZ#2089391)
- [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass (BZ#2098225)
- Fedora version in DataImportCrons is not 'latest' (BZ#2102694)
- [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted (BZ#2109407)
- CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls (BZ#2110562)
- Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based (BZ#2112643)
- Unable to start windows VMs on PSI setups (BZ#2115371)
- [4.11.1] virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 (BZ#2128997)
- Mark Windows 11 as TechPreview (BZ#2129013)
- 4.11.1 rpms (BZ#2139453)
This advisory contains the following OpenShift Virtualization 4.11.1 images.
RHEL-8-CNV-4.11
virt-cdi-operator-container-v4.11.1-5
virt-cdi-uploadserver-container-v4.11.1-5
virt-cdi-apiserver-container-v4.11.1-5
virt-cdi-importer-container-v4.11.1-5
virt-cdi-controller-container-v4.11.1-5
virt-cdi-cloner-container-v4.11.1-5
virt-cdi-uploadproxy-container-v4.11.1-5
checkup-framework-container-v4.11.1-3
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7
kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7
kubevirt-template-validator-container-v4.11.1-4
virt-handler-container-v4.11.1-5
hostpath-provisioner-operator-container-v4.11.1-4
virt-api-container-v4.11.1-5
vm-network-latency-checkup-container-v4.11.1-3
cluster-network-addons-operator-container-v4.11.1-5
virtio-win-container-v4.11.1-4
virt-launcher-container-v4.11.1-5
ovs-cni-marker-container-v4.11.1-5
hyperconverged-cluster-webhook-container-v4.11.1-7
virt-controller-container-v4.11.1-5
virt-artifacts-server-container-v4.11.1-5
kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7
kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7
libguestfs-tools-container-v4.11.1-5
hostpath-provisioner-container-v4.11.1-4
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7
kubevirt-tekton-tasks-copy-template-container-v4.11.1-7
cnv-containernetworking-plugins-container-v4.11.1-5
bridge-marker-container-v4.11.1-5
virt-operator-container-v4.11.1-5
hostpath-csi-driver-container-v4.11.1-4
kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7
kubemacpool-container-v4.11.1-5
hyperconverged-cluster-operator-container-v4.11.1-7
kubevirt-ssp-operator-container-v4.11.1-4
ovs-cni-plugin-container-v4.11.1-5
kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7
kubevirt-tekton-tasks-operator-container-v4.11.1-2
cnv-must-gather-container-v4.11.1-8
kubevirt-console-plugin-container-v4.11.1-9
hco-bundle-registry-container-v4.11.1-49
- Bugs fixed (https://bugzilla.redhat.com/):
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays
- JIRA issues fixed (https://issues.jboss.org/):
LOG-3293 - log-file-metric-exporter container has no limits, exhausting the resources of the node
- Description:
Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud.
For more information about Submariner, see the Submariner open source community website at: https://submariner.io/.
Security fixes:
- CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
- CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
- CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
- CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
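The golang fixes above all bound resource use when parsing untrusted input; upgrading to the patched toolchain is the actual remedy. For services that compile user-supplied regular expressions (the CVE-2022-41715 case), a cheap input-size guard is a reasonable extra layer. A minimal sketch, assuming an illustrative `maxPatternLen` limit and a hypothetical `compileUntrusted` helper that are not part of the advisory:

```go
package main

import (
	"fmt"
	"regexp"
)

// maxPatternLen is an assumed, illustrative cap on user-supplied patterns.
// Patched Go toolchains already limit parser memory, but rejecting
// oversized untrusted patterns up front is cheap defense in depth.
const maxPatternLen = 1000

// compileUntrusted rejects oversized patterns before handing them to the
// regexp parser, then compiles normally.
func compileUntrusted(pattern string) (*regexp.Regexp, error) {
	if len(pattern) > maxPatternLen {
		return nil, fmt.Errorf("pattern too long: %d bytes", len(pattern))
	}
	return regexp.Compile(pattern)
}

func main() {
	if _, err := compileUntrusted("a+b*"); err != nil {
		panic(err)
	}
	long := make([]byte, 10000)
	for i := range long {
		long[i] = 'a'
	}
	if _, err := compileUntrusted(string(long)); err == nil {
		panic("expected rejection of oversized pattern")
	}
	fmt.Println("ok")
}
```

The limit value is workload-dependent; the point is only that untrusted pattern length is bounded before parsing.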
Bugs addressed:
- subctl diagnose firewall metrics does not work on merged kubeconfig (BZ# 2013711)
- [Submariner] - Fails to increase gateway amount after deployment (BZ# 2097381)
- Submariner gateway node does not get deleted with subctl cloud cleanup command (BZ# 2108634)
- submariner GW pods are unable to resolve the DNS of the Broker K8s API URL (BZ# 2119362)
- Submariner gateway node does not get deployed after applying ManagedClusterAddOn on Openstack (BZ# 2124219)
- unable to run subctl benchmark latency, pods fail with ImagePullBackOff (BZ# 2130326)
- [IBM Z] - Submariner addon uninstallation doesn't work from ACM console (BZ# 2136442)
- Tags on AWS security group for gateway node break cloud-controller LoadBalancer (BZ# 2139477)
- RHACM - Submariner: UI support for OpenStack #19297 (ACM-1242)
- Submariner OVN support (ACM-1358)
- Submariner Azure Console support (ACM-1388)
- ManagedClusterSet consumers migrate to v1beta2 (ACM-1614)
- Submariner on disconnected ACM #22000 (ACM-1678)
- Submariner gateway: Error creating AWS security group if already exists (ACM-2055)
- Submariner gateway security group in AWS not deleted when uninstalling submariner (ACM-2057)
- The submariner-metrics-proxy pod pulls an image with wrong naming convention (ACM-2058)
- The submariner-metrics-proxy pod is not part of the Agent readiness check (ACM-2067)
- Subctl 0.14.0 prints version "vsubctl" (ACM-2132)
- managedclusters "local-cluster" not found and missing Submariner Broker CRD (ACM-2145)
- Add support of ARO to Submariner deployment (ACM-2150)
- The e2e tests execution fails for "Basic TCP connectivity" tests (ACM-2204)
- Gateway error shown "diagnose all" tests (ACM-2206)
- Submariner does not support cluster "kube-proxy ipvs mode" (ACM-2211)
- Vsphere cluster shows Pod Security admission controller warnings (ACM-2256)
- Cannot use submariner with OSP and self signed certs (ACM-2274)
- Subctl diagnose tests spawn nettest image with wrong tag naming convention (ACM-2387)
- Subctl 0.14.1 prints version "devel" (ACM-2482)
- Bugs fixed (https://bugzilla.redhat.com/):
2013711 - subctl diagnose firewall metrics does not work on merged kubeconfig
2097381 - [Submariner] - Fails to increase gateway amount after deployment
2108634 - Submariner gateway node does not get deleted with subctl cloud cleanup command
2119362 - submariner GW pods are unable to resolve the DNS of the Broker K8s API URL
2124219 - Submariner gateway node does not get deployed after applying ManagedClusterAddOn on Openstack
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2130326 - unable to run subctl benchmark latency, pods fail with ImagePullBackOff
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2136442 - [IBM Z] - Submariner addon uninstallation doesn't work from ACM console
2139477 - Tags on AWS security group for gateway node break cloud-controller LoadBalancer
2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
- JIRA issues fixed (https://issues.jboss.org/):
ACM-1614 - ManagedClusterSet consumers migrate to v1beta2 (Submariner)
ACM-2055 - Submariner gateway: Error creating AWS security group if already exists
ACM-2057 - [Submariner] - submariner gateway security group in aws not deleted when uninstalling submariner
ACM-2058 - [Submariner] - The submariner-metrics-proxy pod pulls an image with wrong naming convention
ACM-2067 - [Submariner] - The submariner-metrics-proxy pod is not part of the Agent readiness check
ACM-2132 - Subctl 0.14.0 prints version "vsubctl"
ACM-2145 - managedclusters "local-cluster" not found and missing Submariner Broker CRD
ACM-2150 - Add support of ARO to Submariner deployment
ACM-2204 - [Submariner] - e2e tests execution fails for "Basic TCP connectivity" tests
ACM-2206 - [Submariner] - Gateway error shown "diagnose all" tests
ACM-2211 - [Submariner] - Submariner does not support cluster "kube-proxy ipvs mode"
ACM-2256 - [Submariner] - Vsphere cluster shows Pod Security admission controller warnings
ACM-2274 - Cannot use submariner with OSP and self signed certs
ACM-2387 - [Submariner] - subctl diagnose tests spawn nettest image with wrong tag naming convention
ACM-2482 - Subctl 0.14.1 prints version "devel"
- This advisory contains the following OpenShift Virtualization 4.12.0 images:
Security Fix(es):
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)
- golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- golang: net/http: improper sanitization of Transfer-Encoding header (CVE-2022-1705)
- golang: go/parser: stack exhaustion in all Parse* functions (CVE-2022-1962)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)
- golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
- golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)
- golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)
- golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)
- golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working (CVE-2022-32148)
- golang: crypto/tls: session tickets lack random ticket_age_add (CVE-2022-30629)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
RHEL-8-CNV-4.12
=============
bridge-marker-container-v4.12.0-24
cluster-network-addons-operator-container-v4.12.0-24
cnv-containernetworking-plugins-container-v4.12.0-24
cnv-must-gather-container-v4.12.0-58
hco-bundle-registry-container-v4.12.0-769
hostpath-csi-driver-container-v4.12.0-30
hostpath-provisioner-container-v4.12.0-30
hostpath-provisioner-operator-container-v4.12.0-31
hyperconverged-cluster-operator-container-v4.12.0-96
hyperconverged-cluster-webhook-container-v4.12.0-96
kubemacpool-container-v4.12.0-24
kubevirt-console-plugin-container-v4.12.0-182
kubevirt-ssp-operator-container-v4.12.0-64
kubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55
kubevirt-tekton-tasks-copy-template-container-v4.12.0-55
kubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55
kubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55
kubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55
kubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55
kubevirt-tekton-tasks-operator-container-v4.12.0-40
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55
kubevirt-template-validator-container-v4.12.0-32
libguestfs-tools-container-v4.12.0-255
ovs-cni-marker-container-v4.12.0-24
ovs-cni-plugin-container-v4.12.0-24
virt-api-container-v4.12.0-255
virt-artifacts-server-container-v4.12.0-255
virt-cdi-apiserver-container-v4.12.0-72
virt-cdi-cloner-container-v4.12.0-72
virt-cdi-controller-container-v4.12.0-72
virt-cdi-importer-container-v4.12.0-72
virt-cdi-operator-container-v4.12.0-72
virt-cdi-uploadproxy-container-v4.12.0-71
virt-cdi-uploadserver-container-v4.12.0-72
virt-controller-container-v4.12.0-255
virt-exportproxy-container-v4.12.0-255
virt-exportserver-container-v4.12.0-255
virt-handler-container-v4.12.0-255
virt-launcher-container-v4.12.0-255
virt-operator-container-v4.12.0-255
virtio-win-container-v4.12.0-10
vm-network-latency-checkup-container-v4.12.0-89
- Solution:
Before applying this update, you must apply all previously released errata relevant to your system.
To apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1719190 - Unable to cancel live-migration if virt-launcher pod in pending state
2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2040377 - Unable to delete failed VMIM after VM deleted
2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed
2052556 - Metric "kubevirt_num_virt_handlers_by_node_running_virt_launcher" reporting incorrect value
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2060499 - [RFE] Cannot add additional service (or other objects) to VM template
2069098 - Large scale |VMs migration is slow due to low migration parallelism
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2071491 - Storage Throughput metrics are incorrect in Overview
2072797 - Metrics in Virtualization -> Overview period is not clear or configurable
2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers
2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode
2086551 - Min CPU feature found in labels
2087724 - Default template show no boot source even there are auto-upload boot sources
2088129 - [SSP] webhook does not comply with restricted security context
2088464 - [CDI] cdi-deployment does not comply with restricted security context
2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR
2089744 - HCO should label its control plane namespace to admit pods at privileged security level
2089751 - 4.12.0 containers
2089804 - 4.12.0 rpms
2091856 - "Edit BootSource" action should have more explicit information when disabled
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer
2093771 - The disk source should be PVC if the template has no auto-update boot source
2093996 - kubectl get vmi API should always return primary interface if exist
2094202 - Cloud-init username field should have hint
2096285 - KubeVirt CR API documentation is missing docs for many fields
2096780 - [RFE] Add ssh-key and sysprep to template scripts tab
2097436 - Online disk expansion ignores filesystem overhead change
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2099556 - [RFE] Add option to enable RDP service for windows vm
2099573 - [RFE] Improve template's message about not editable
2099923 - [RFE] Merge "SSH access" and "SSH command" into one
2100290 - Error is not dismissed on catalog review page
2100436 - VM list filtering ignores VMs in error-states
2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2100629 - Update nested support KBASE article
2100679 - The number of hardware devices is not correct in vm overview tab
2100682 - All hardware devices get deleted while just delete one
2100684 - Workload profile are not editable during creation and after creation
2101144 - VM filter has two "Other" checkboxes which are triggered together
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101167 - Edit buttons clickable area is too large.
2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state
2101390 - Easy to miss the "tick" when adding GPU device to vm via UI
2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2101423 - wrong user name on using ignition
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101445 - "Pending changes - Boot Order"
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101499 - Cannot add NIC to VM template as non-priv user
2101501 - NAME parameter in VM template has no effect.
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101667 - VMI view is not aligned with vm and tempates
2101681 - All templates are labeling "source available" in template list page
2102074 - VM Creation time on VM Overview Details card lacks string
2102125 - vm clone modal is displaying DV size instead of PVC size
2102132 - align the utilization card of single VM overview with the design
2102138 - Should the word "new" be removed from "Create new VirtualMachine from catalog"?
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102475 - Template 'vm-template-example' should be filtered by 'Fedora' rather than 'Other'
2102561 - sysprep-info should link to downstream doc
2102737 - Clone a VM should lead to vm overview tab
2102740 - "Save" button on vm clone modal should be "Clone"
2103806 - "404: Not Found" appears shortly by clicking the PVC link on vm disk tab
2103807 - PVC is not named by VM name while creating vm quickly
2103817 - Workload profile values in vm details should align with template's value
2103844 - VM nic model is empty
2104331 - VM list page scroll up automatically
2104402 - VM create button is not enabled while adding multiple environment disks
2104422 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2104424 - Enable descheduler or hide it on template's scheduling tab
2104479 - [4.12] Cloned VM's snapshot restore fails if the source VM disk is deleted
2104480 - Alerts in VM overview tab disappeared after a few seconds
2104785 - "Add disk" and "Disks" are on the same line
2104859 - [RFE] Add "Copy SSH command" to VM action list
2105257 - Can't set log verbosity level for virt-operator pod
2106175 - All pages are crashed after visit Virtualization -> Overview
2106963 - Cannot add configmap for windows VM
2107279 - VM Template's bootable disk can be marked as bootable
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108339 - datasource does not provide timestamp when updated
2108638 - When choosing a vm or template while in all-namespace, and returning to list, namespace is changed
2109818 - Upstream metrics documentation is not detailed enough
2109975 - DataVolume fails to import "cirros-container-disk-demo" image
2110256 - Storage -> PVC -> upload data, does not support source reference
2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls
2111240 - GiB changes to B in Template's Edit boot source reference modal
2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111328 - kubevirt plugin console crashed after visit vmi page
2111378 - VM SSH command generated by UI points at api VIP
2111744 - Cloned template should not label app.kubernetes.io/name: common-templates
2111794 - the virtlogd process is taking too much RAM! (17468Ki > 17Mi)
2112900 - button style are different
2114516 - Nothing happens after clicking on Fedora cloud image list link
2114636 - The style of displayed items are not unified on VM tabs
2114683 - VM overview tab is crashed just after the vm is created
2115257 - Need to Change system-product-name to "OpenShift Virtualization" in CNV-4.12
2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass
2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items
2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates
2116225 - The filter keyword of the related operator 'Openshift Data Foundation' is 'OCS' rather than 'ODF'
2116644 - Importer pod is failing to start with error "MountVolume.SetUp failed for volume "cdi-proxy-cert-vol" : configmap "custom-ca" not found"
2117549 - Cannot edit cloud-init data after add ssh key
2117803 - Cannot edit ssh even vm is stopped
2117813 - Improve descriptive text of VM details while VM is off
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
2118257 - outdated doc link tolerations modal
2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format
2119069 - Unable to start windows VMs on PSI setups
2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2119309 - readinessProbe in VM stays on failed
2119615 - Change the disk size causes the unit changed
2120907 - Cannot filter disks by label
2121320 - Negative values in migration metrics
2122236 - Failing to delete HCO with SSP sticking around
2122990 - VMExport should check APIGroup
2124147 - "ReadOnlyMany" should not be added to supported values in memory dump
2124307 - Ui crash/stuck on loading when trying to detach disk on a VM
2124528 - On upgrade, when live-migration is failed due to an infra issue, virt-handler continuously and endlessly tries to migrate it
2124555 - View documentation link on MigrationPolicies page des not work
2124557 - MigrationPolicy description is not displayed on Details page
2124558 - Non-privileged user can start MigrationPolicy creation
2124565 - Deleted DataSource reappears in list
2124572 - First annotation can not be added to DataSource
2124582 - Filtering VMs by OS does not work
2124594 - Docker URL validation is inconsistent over application
2124597 - Wrong case in Create DataSource menu
2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile
2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state
2127787 - Expose the PVC source of the dataSource on UI
2127843 - UI crashed by selecting "Live migration network"
2127931 - Change default time range on Virtualization -> Overview -> Monitoring dashboard to 30 minutes
2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer
2128002 - Error after VM template deletion
2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards
2128872 - [4.11]Can't restore cloned VM
2128948 - Cannot create DataSource from default YAML
2128949 - Cannot create MigrationPolicy from example YAML
2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2129013 - Mark Windows 11 as TechPreview
2129234 - Service is not deleted along with the VM when the VM is created from a template with service
2129301 - Cloud-init network data don't wipe out on uncheck checkbox 'Add network data'
2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook
2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV
2130588 - crypto-policy : Common Ciphers support by apiserver and hco
2130695 - crypto-policy : Logging Improvement and publish the source of ciphers
2130909 - Non-privileged user can start DataSource creation
2131157 - KV data transfer rate chart in VM Metrics tab is not displayed
2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough
2131674 - Bump virtlogd memory requirement to 20Mi
2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11
2132682 - Default YAML entity name convention.
2132721 - Delete dialogs
2132744 - Description text is missing in Live Migrations section
2132746 - Background is broken in Virtualization Monitoring page
2132783 - VM can not be created from Template with edited boot source
2132793 - Edited Template BSR is not saved
2132932 - Typo in PVC size units menu
2133540 - [pod security violation audit] Audit violation in "cni-plugins" container should be fixed
2133541 - [pod security violation audit] Audit violation in "bridge-marker" container should be fixed
2133542 - [pod security violation audit] Audit violation in "manager" container should be fixed
2133543 - [pod security violation audit] Audit violation in "kube-rbac-proxy" container should be fixed
2133655 - [pod security violation audit] Audit violation in "cdi-operator" container should be fixed
2133656 - [4.12][pod security violation audit] Audit violation in "hostpath-provisioner-operator" container should be fixed
2133659 - [pod security violation audit] Audit violation in "cdi-controller" container should be fixed
2133660 - [pod security violation audit] Audit violation in "cdi-source-update-poller" container should be fixed
2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod
2134672 - [e2e] add data-test-id for catalog -> storage section
2134825 - Authorization for expand-spec endpoint missing
2135805 - Windows 2022 template is missing vTPM and UEFI params in spec
2136051 - Name jumping when trying to create a VM with source from catalog
2136425 - Windows 11 is detected as Windows 10
2136534 - Not possible to specify a TTL on VMExports
2137123 - VMExport: export pod is not PSA complaint
2137241 - Checkbox about delete vm disks is not loaded while deleting VM
2137243 - registery input add docker prefix twice
2137349 - "Manage source" action infinitely loading on DataImportCron details page
2137591 - Inconsistent dialog headings/titles
2137731 - Link of VM status in overview is not working
2137733 - No link for VMs in error status in "VirtualMachine statuses" card
2137736 - The column name "MigrationPolicy name" can just be "Name"
2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly
2138112 - Unsupported S3 endpoint option in Add disk modal
2138119 - "Customize VirtualMachine" flow is not user-friendly because settings are split into 2 modals
2138199 - Win11 and Win22 templates are not filtered properly by Template provider
2138653 - Saving Template prameters reloads the page
2138657 - Setting DATA_SOURCE_ Template parameters makes VM creation fail
2138664 - VM that was created with SSH key fails to start
2139257 - Cannot add disk via "Using an existing PVC"
2139260 - Clone button is disabled while VM is running
2139293 - Non-admin user cannot load VM list page
2139296 - Non-admin cannot load MigrationPolicies page
2139299 - No auto-generated VM name while creating VM by non-admin user
2139306 - Non-admin cannot create VM via customize mode
2139479 - virtualization overview crashes for non-priv user
2139574 - VM name gets "emptyname" if click the create button quickly
2139651 - non-priv user can click create when have no permissions
2139687 - catalog shows template list for non-priv users
2139738 - [4.12]Can't restore cloned VM
2139820 - non-priv user cant reach vm details
2140117 - Provide upgrade path from 4.11.1->4.12.0
2140521 - Click the breadcrumb list about "VirtualMachines" goes to undefined project
2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user
2140627 - Not able to select storageClass if there is no default storageclass defined
2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user
2140808 - Hyperv feature set to "enabled: false" prevents scheduling
2140977 - Alerts number is not correct on Virtualization overview
2140982 - The base template of cloned template is "Not available"
2140998 - Incorrect information shows in overview page per namespace
2141089 - Unable to upload boot images.
2141302 - Unhealthy states alerts and state metrics are missing
2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations
2141494 - "Start in pause mode" option is not available while creating the VM
2141654 - warning log appearing on VMs: found no SR-IOV networks
2141711 - Node column selector is redundant for non-priv user
2142468 - VM action "Stop" should not be disabled when VM in pause state
2142470 - Delete a VM or template from all projects leads to 404 error
2142511 - Enhance alerts card in overview
2142647 - Error after MigrationPolicy deletion
2142891 - VM latency checkup: Failed to create the checkup's Job
2142929 - Permission denied when try get instancestypes
2143268 - Topolvm storageProfile missing accessModes and volumeMode
2143498 - Could not load template while creating VM from catalog
2143964 - Could not load template while creating VM from catalog
2144580 - "?" icon is too big in VM Template Disk tab
2144828 - "?" icon is too big in VM Template Disk tab
2144839 - Alerts number is not correct on Virtualization overview
2153849 - After upgrade to 4.11.1->4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten
2155757 - Incorrect upstream-version label "v1.6.0-unstable-410-g09ea881c" is tagged to 4.12 hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container
- Description:
The rh-sso-7/sso76-openshift-rhel8 container image and rh-sso-7/sso7-rhel8-operator operator have been updated for RHEL-8 based Middleware Containers to address the following security issues. Users of these images are also encouraged to rebuild all container images that depend on these images.
Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally.

Bugs fixed (https://bugzilla.redhat.com/):
2138971 - CVE-2022-3782 keycloak: path traversal via double URL encoding
2141404 - CVE-2022-3916 keycloak: Session takeover with OIDC offline refresh tokens
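The path-traversal class referenced in the keycloak entry above typically arises when input is URL-decoded twice: a filter inspects the once-decoded path, but the application decodes it again before use. A minimal, hypothetical Python illustration of that pattern (not Keycloak's actual code; `naive_filter` is invented for the sketch):

```python
from urllib.parse import unquote

# "../etc/passwd" with each special character percent-encoded twice
payload = "%252e%252e%252fetc%252fpasswd"

once = unquote(payload)   # "%2e%2e%2fetc%2fpasswd" - still looks harmless
twice = unquote(once)     # "../etc/passwd" - traversal appears on 2nd decode

def naive_filter(path: str) -> bool:
    # hypothetical check that runs after only one decoding pass
    return ".." not in path

print(naive_filter(once))  # the filter passes the once-decoded value
print(twice)               # but a second decode yields a traversal path
```

The fix in this class of bug is to decode exactly once, then canonicalize the path before any security check.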
- JIRA issues fixed (https://issues.jboss.org/):
CIAM-4412 - Build new OCP image for rh-sso-7/sso76-openshift-rhel8
CIAM-4413 - Generate new operator bundle image for this patch
- Summary:
An update is now available for Migration Toolkit for Runtimes (v1.0.1).

Bugs fixed (https://bugzilla.redhat.com/):
2142707 - CVE-2022-42920 Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing
- Bugs fixed (https://bugzilla.redhat.com/):
2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2148199 - CVE-2022-39278 Istio: Denial of service attack via a specially crafted message
2148661 - CVE-2022-3962 kiali: error message spoofing in kiali UI
2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
- JIRA issues fixed (https://issues.jboss.org/):
OSSM-1977 - Support for Istio Gateway API in Kiali
OSSM-2083 - Update maistra/istio 2.3 to Istio 1.14.5
OSSM-2147 - Unexpected validation message on Gateway object
OSSM-2169 - Member controller doesn't retry on conflict
OSSM-2170 - Member namespaces aren't cleaned up when a cluster-scoped SMMR is deleted
OSSM-2179 - Wasm plugins only support OCI images with 1 layer
OSSM-2184 - Istiod isn't allowed to delete analysis distribution report configmap
OSSM-2188 - Member namespaces not cleaned up when SMCP is deleted
OSSM-2189 - If multiple SMCPs exist in a namespace, the controller reconciles them all
OSSM-2190 - The memberroll controller reconciles SMMRs with invalid name
OSSM-2232 - The member controller reconciles ServiceMeshMember with invalid name
OSSM-2241 - Remove v2.0 from Create ServiceMeshControlPlane Form
OSSM-2251 - CVE-2022-3962 openshift-istio-kiali-container: kiali: content spoofing [ossm-2.3]
OSSM-2308 - add root CA certificates to kiali container
OSSM-2315 - be able to customize openshift auth timeouts
OSSM-2324 - Gateway injection does not work when pods are created by cluster admins
OSSM-2335 - Potential hang using Traces scatterplot chart
OSSM-2338 - Federation deployment does not need router mode sni-dnat
OSSM-2344 - Restarting istiod causes Kiali to flood CRI-O with port-forward requests
OSSM-2375 - Istiod should log member namespaces on every update
OSSM-2376 - ServiceMesh federation stops working after the restart of istiod pod
OSSM-535 - Support validationMessages in SMCP
OSSM-827 - ServiceMeshMembers point to wrong SMCP name
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.6.3 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console, with security policy built in.

Bugs fixed (https://bugzilla.redhat.com/):
2129679 - clusters belong to global clusterset is not selected by placement when rescheduling
2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
2139085 - RHACM 2.6.3 images
2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements
The following advisory data is extracted from:
https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0254.json
Red Hat officially shut down their mailing list notifications on October 10, 2023. Because of this, Packet Storm has recreated the data below as a reference point to raise awareness. Note that, because revision updates cannot easily be tracked without crawling Red Hat's archive, these advisories are single notifications; we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment.
Description:
The rsync utility enables users to copy and synchronize files locally or across a network. Synchronization with rsync is fast because rsync sends only the differences between files over the network instead of sending whole files. The rsync utility is also used as a mirroring tool.
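The delta-transfer idea described above can be sketched by hashing fixed-size blocks of the receiver's copy and shipping only the blocks that differ. This is a deliberate simplification of rsync's actual rolling-checksum algorithm (which also matches blocks at arbitrary offsets); all names and the tiny block size here are illustrative:

```python
import hashlib

BLOCK = 4  # tiny block size for demonstration; real rsync uses hundreds of bytes

def block_hashes(data: bytes) -> list:
    # hash every fixed-size block of the receiver's existing copy
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(old: bytes, new: bytes) -> list:
    # return only the (block_index, bytes) pairs the sender must transmit
    old_h = block_hashes(old)
    changes = []
    for i in range(0, len(new), BLOCK):
        blk = new[i:i + BLOCK]
        j = i // BLOCK
        if j >= len(old_h) or hashlib.md5(blk).hexdigest() != old_h[j]:
            changes.append((j, blk))
    return changes

def apply_delta(old: bytes, changes: list) -> bytes:
    # receiver patches its copy using only the transmitted blocks
    blocks = [old[i:i + BLOCK] for i in range(0, len(old), BLOCK)]
    for j, blk in changes:
        if j < len(blocks):
            blocks[j] = blk
        else:
            blocks.append(blk)
    return b"".join(blocks)

old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"
d = delta(old, new)
print(d)  # only the middle block needs to cross the network
```

With a 12-byte file and one changed block, only 4 bytes of payload travel instead of the whole file; rsync applies the same principle at scale.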
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202208-0404",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.1"
},
{
"model": "network security",
"scope": "lt",
"trust": 1.0,
"vendor": "stormshield",
"version": "4.3.16"
},
{
"model": "ipados",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.1"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "watchos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "9.1"
},
{
"model": "hci compute node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "management services for element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "37"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.7.1"
},
{
"model": "network security",
"scope": "gte",
"trust": 1.0,
"vendor": "stormshield",
"version": "3.7.31"
},
{
"model": "oncommand workflow automation",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "storagegrid",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "network security",
"scope": "lt",
"trust": 1.0,
"vendor": "stormshield",
"version": "3.11.22"
},
{
"model": "network security",
"scope": "gte",
"trust": 1.0,
"vendor": "stormshield",
"version": "4.3.0"
},
{
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "network security",
"scope": "gte",
"trust": 1.0,
"vendor": "stormshield",
"version": "3.11.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.1"
},
{
"model": "network security",
"scope": "gte",
"trust": 1.0,
"vendor": "stormshield",
"version": "4.6.0"
},
{
"model": "iphone os",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "16.0"
},
{
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "16.1"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"model": "network security",
"scope": "lt",
"trust": 1.0,
"vendor": "stormshield",
"version": "4.6.3"
},
{
"model": "hci",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "network security",
"scope": "lt",
"trust": 1.0,
"vendor": "stormshield",
"version": "3.7.34"
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"model": "zlib",
"scope": "lte",
"trust": 1.0,
"vendor": "zlib",
"version": "1.2.12"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:zlib:zlib:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "1.2.12",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:37:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:storagegrid:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:hci:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:windows:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:management_services_for_element_software:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "11.7.1",
"versionStartIncluding": "11.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "16.1",
"versionStartIncluding": "16.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.1",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.6.1",
"versionStartIncluding": "12.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.7.1",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.7.1",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.6.3",
"versionStartIncluding": "4.6.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.3.16",
"versionStartIncluding": "4.3.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.11.22",
"versionStartIncluding": "3.11.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.7.34",
"versionStartIncluding": "3.7.31",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "173605"
},
{
"db": "PACKETSTORM",
"id": "173107"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170179"
},
{
"db": "PACKETSTORM",
"id": "170898"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "170759"
},
{
"db": "PACKETSTORM",
"id": "170806"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "176559"
}
],
"trust": 1.1
},
"cve": "CVE-2022-37434",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-37434",
"trust": 1.0,
"value": "CRITICAL"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field. NOTE: only applications that call inflateGetHeader are affected. Some common applications bundle the affected zlib source code but may be unable to call inflateGetHeader (e.g., see the nodejs/node reference). \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* github.com/Masterminds/vcs: Command Injection via argument injection\n(CVE-2022-21235)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. To check for available updates, use the OpenShift CLI (oc)\nor web console. Instructions for upgrading a cluster are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nfor x86_64, s390x, ppc64le, and aarch64 architectures. The image digests\nmay be found at\nhttps://quay.io/repository/openshift-release-dev/ocp-release?tab=tags. 
\n\nThe sha values for the release are\n\n(For x86_64 architecture)\nThe image digest is\nsha256:c6771b12bd873c0e3e5fbc7afa600d92079de6534dcb52f09cb1d22ee49608a9\n\n(For s390x architecture)\nThe image digest is\nsha256:622b5361f95d1d512ea84f363ac06155cbb9ee28e85ccaae1acd80b98b660fa8\n\n(For ppc64le architecture)\nThe image digest is\nsha256:50c131cf85dfb00f258af350a46b85eff8fb8084d3e1617520cd69b59caeaff7\n\n(For aarch64 architecture)\nThe image digest is\nsha256:9e575c4ece9caaf31acbef246ccad71959cd5bf634e7cb284b0849ddfa205ad7\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift CLI (oc)\nor web console. Instructions for upgrading a cluster are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2215317 - CVE-2022-21235 github.com/Masterminds/vcs: Command Injection via argument injection\n\n5. JIRA issues fixed (https://issues.redhat.com/):\n\nOCPBUGS-15446 - (release-4.11) gather \"gateway-mode-config\" config map from \"openshift-network-operator\" namespace\nOCPBUGS-15532 - visiting Configurations page returns error Cannot read properties of undefined (reading \u0027apiGroup\u0027)\nOCPBUGS-15645 - Can\u0027t use git lfs in BuildConfig git source with strategy Docker\nOCPBUGS-15739 - Environment cannot find Python\nOCPBUGS-15758 - [release-4.11] Bump Jenkins and Jenkins Agent Base image versions\nOCPBUGS-15942 - 9% of OKD tests failing on error: tag latest failed: Internal error occurred: registry.centos.org/dotnet/dotnet-31-centos7:latest: Get \"https://registry.centos.org/v2/\": dial tcp: lookup registry.centos.org on 172.30.0.10:53: no such host\nOCPBUGS-15966 - [4.12] MetalLB contains incorrect data Correct and incorrect MetalLB resources coexist should have correct statuses\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: Red Hat OpenShift Data Foundation 4.13.0 security and bug fix update\nAdvisory ID: RHSA-2023:3742-02\nProduct: Red Hat OpenShift Data Foundation\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:3742\nIssue date: 2023-06-21\nCVE Names: CVE-2015-20107 CVE-2018-25032 CVE-2020-10735 \n CVE-2020-16250 CVE-2020-16251 CVE-2020-17049 \n CVE-2021-3765 CVE-2021-3807 CVE-2021-4231 \n CVE-2021-4235 CVE-2021-4238 CVE-2021-28861 \n CVE-2021-43519 CVE-2021-43998 CVE-2021-44531 \n CVE-2021-44532 CVE-2021-44533 CVE-2021-44964 \n CVE-2021-46828 CVE-2021-46848 CVE-2022-0670 \n CVE-2022-1271 CVE-2022-1304 CVE-2022-1348 \n CVE-2022-1586 CVE-2022-1587 CVE-2022-2309 \n CVE-2022-2509 CVE-2022-2795 CVE-2022-2879 \n CVE-2022-2880 CVE-2022-3094 CVE-2022-3358 \n CVE-2022-3515 CVE-2022-3517 CVE-2022-3715 \n CVE-2022-3736 CVE-2022-3821 CVE-2022-3924 \n CVE-2022-4415 CVE-2022-21824 CVE-2022-23540 \n CVE-2022-23541 CVE-2022-24903 CVE-2022-26280 \n CVE-2022-27664 CVE-2022-28805 CVE-2022-29154 \n CVE-2022-30635 CVE-2022-31129 CVE-2022-32189 \n CVE-2022-32190 CVE-2022-33099 CVE-2022-34903 \n CVE-2022-35737 CVE-2022-36227 CVE-2022-37434 \n CVE-2022-38149 CVE-2022-38900 CVE-2022-40023 \n CVE-2022-40303 CVE-2022-40304 CVE-2022-40897 \n CVE-2022-41316 CVE-2022-41715 CVE-2022-41717 \n CVE-2022-41723 CVE-2022-41724 CVE-2022-41725 \n CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 \n CVE-2022-42898 CVE-2022-42919 CVE-2022-43680 \n CVE-2022-45061 CVE-2022-45873 CVE-2022-46175 \n CVE-2022-47024 CVE-2022-47629 CVE-2022-48303 \n CVE-2022-48337 CVE-2022-48338 CVE-2022-48339 \n CVE-2023-0361 CVE-2023-0620 CVE-2023-0665 \n CVE-2023-2491 CVE-2023-22809 CVE-2023-24329 \n CVE-2023-24999 CVE-2023-25000 CVE-2023-25136 \n=====================================================================\n\n1. 
Summary:

Updated images that include numerous enhancements, security, and bug fixes are now available in Red Hat Container Registry for Red Hat OpenShift Data Foundation 4.13.0 on Red Hat Enterprise Linux 9.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.

Security Fix(es):

* goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be (CVE-2021-4238)
* decode-uri-component: improper input validation resulting in DoS (CVE-2022-38900)
* vault: Hashicorp Vault AWS IAM Integration Authentication Bypass (CVE-2020-16250)
* vault: GCP Auth Method Allows Authentication Bypass (CVE-2020-16251)
* nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes (CVE-2021-3807)
* go-yaml: Denial of Service in go-yaml (CVE-2021-4235)
* vault: incorrect policy enforcement (CVE-2021-43998)
* nodejs: Improper handling of URI Subject Alternative Names (CVE-2021-44531)
* nodejs: Certificate Verification Bypass via String Injection (CVE-2021-44532)
* nodejs: Incorrect handling of certificate subject and issuer fields (CVE-2021-44533)
* golang: archive/tar: unbounded memory consumption when reading headers (CVE-2022-2879)
* golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters (CVE-2022-2880)
* nodejs-minimatch: ReDoS via the braceExpand function (CVE-2022-3517)
* jsonwebtoken: Insecure default algorithm in jwt.verify() could lead to signature validation bypass (CVE-2022-23540)
* jsonwebtoken: Insecure implementation of key retrieval function could lead to Forgeable Public/Private Tokens from RSA to HMAC (CVE-2022-23541)
* golang: net/http: handle server errors after sending GOAWAY (CVE-2022-27664)
* golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)
* golang: net/url: JoinPath does not strip relative path components in all circumstances (CVE-2022-32190)
* consul: Consul Template May Expose Vault Secrets When Processing Invalid Input (CVE-2022-38149)
* vault: insufficient certificate revocation list checking (CVE-2022-41316)
* golang: regexp/syntax: limit memory used by parsing regexps (CVE-2022-41715)
* golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)
* net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding (CVE-2022-41723)
* golang: crypto/tls: large handshake records may cause panics (CVE-2022-41724)
* golang: net/http, mime/multipart: denial of service from excessive resource consumption (CVE-2022-41725)
* json5: Prototype Pollution in JSON5 via Parse Method (CVE-2022-46175)
* vault: Vault's Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File (CVE-2023-0620)
* hashicorp/vault: Vault's PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata (CVE-2023-0665)
* Hashicorp/vault: Vault Fails to Verify if Approle SecretID Belongs to Role During a Destroy Operation (CVE-2023-24999)
* hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations (CVE-2023-25000)
* validator: Inefficient Regular Expression Complexity in Validator.js (CVE-2021-3765)
* nodejs: Prototype pollution via console.table properties (CVE-2022-21824)
* golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service (CVE-2022-32189)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

3. Solution:

These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index

All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images that provide numerous bug fixes and enhancements.

4. Bugs fixed (https://bugzilla.redhat.com/):

1786696 - UI->Dashboards->Overview->Alerts shows MON components are at different versions, though they are NOT
1855339 - Wrong version of ocs-storagecluster
1943137 - [Tracker for BZ #1945618] rbd: Storage is not reclaimed after persistentvolumeclaim and job that utilized it are deleted
1944687 - [RFE] KMS server connection lost alert
1989088 - [4.8][Multus] UX experience issues and enhancements
2005040 - Uninstallation of ODF StorageSystem via OCP Console fails, gets stuck in Terminating state
2005830 - [DR] DRPolicy resource should not be editable after creation
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes
2028193 - CVE-2021-43998 vault: incorrect policy enforcement
2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names
2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection
2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields
2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties
2042914 - [Tracker for BZ #2013109] [UI] Refreshing web console from the pop-up is taking to Install Operator page.
2052252 - CVE-2021-44531 CVE-2021-44532 CVE-2021-44533 CVE-2022-21824 [CVE] nodejs: various flaws [openshift-data-foundation-4]
2101497 - ceph_mon_metadata metrics are not collected properly
2101916 - must-gather is not collecting ceph logs or coredumps
2102304 - [GSS] Remove the entry of removed node from Storagecluster under Node Topology
2104148 - route ocs-storagecluster-cephobjectstore misconfigured to use http and https on same http route in haproxy.config
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service
2115020 - [RDR] Sync schedule is not removed from mirrorpeer yaml after DR Policy is deleted
2115616 - [GSS] failing to change ownership of the NFS based PVC for PostgreSQL pod by using kube_pv_chown utility
2119551 - CVE-2022-38149 consul: Consul Template May Expose Vault Secrets When Processing Invalid Input
2120098 - [RDR] Even before an action gets fully completed, PeerReady and Available are reported as True in the DRPC yaml
2120944 - Large Omap objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
2124668 - CVE-2022-32190 golang: net/url: JoinPath does not strip relative path components in all circumstances
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2126299 - CVE-2021-3765 validator: Inefficient Regular Expression Complexity in Validator.js
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
2135339 - CVE-2022-41316 vault: insufficient certificate revocation list checking
2139037 - [cee/sd]Unable to access s3 via RGW route ocs-storagecluster-cephobjectstore
2141095 - [RDR] Storage System page on ACM Hub is visible even when data observability is not enabled
2142651 - RFE: OSDs need ability to bind to a service IP instead of the pod IP to support RBD mirroring in OCP clusters
2142894 - Credentials are ignored when creating a Backing/Namespace store after prompted to enter a name for the resource
2142941 - RGW cloud Transition. HEAD/GET requests to MCG are failing with 403 error
2143944 - [GSS] unknown parameter name "FORCE_OSD_REMOVAL"
2144256 - [RDR] [UI] DR Application applied to a single DRPolicy starts showing connected to multiple policies due to console flickering
2151903 - [MCG] Azure bs/ns creation fails with target bucket does not exists
2152143 - [Noobaa Clone] Secrets are used in env variables
2154250 - NooBaa Bucket Quota alerts are not working
2155507 - RBD reclaimspace job fails when the PVC is not mounted
2155743 - ODF Dashboard fails to load
2156067 - [RDR] [UI] When Peer Ready isn't True, UI doesn't reset the error message even when no subscription group is selected
2156069 - [UI] Instances of OCS can be seen on BlockPool action modals
2156263 - CVE-2022-46175 json5: Prototype Pollution in JSON5 via Parse Method
2156519 - 4.13: odf-csi-addons-operator failed with OwnNamespace InstallModeType not supported
2156727 - CVE-2021-4235 go-yaml: Denial of Service in go-yaml
2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
2157876 - [OCP Tracker] [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn't appear after ODF upgrade resulting in dashboard crash
2158922 - Namespace store fails to get created via the ODF UI
2159676 - rbd-mirror logs are rotated very frequently, increase the default maxlogsize for rbd-mirror
2161274 - CVE-2022-41717 golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests
2161879 - logging issue when deleting webhook resources
2161937 - collect kernel and journal logs from all worker nodes
2162257 - [RDR][CEPHFS] sync/replication is getting stopped for some pvc
2164617 - Unable to expand ocs-storagecluster-ceph-rbd PVCs provisioned in Filesystem mode
2165495 - Placement scheduler is using too much resources
2165504 - Sizer sharing link is broken
2165929 - [RFE] ODF bluewash introduction in 4.12.x
2165938 - ocs-operator CSV is missing disconnected env annotation.
2165984 - [RDR] Replication stopped for images is represented with incorrect color
2166222 - CSV is missing disconnected env annotation and relatedImages spec
2166234 - Application user unable to invoke Failover and Relocate actions
2166869 - Match the version of consoleplugin to odf operator
2167299 - [RFE] ODF bluewash introduction in 4.12.x
2167308 - [mcg-clone] Security and VA issues with ODF operator
2167337 - CVE-2020-16250 vault: Hashicorp Vault AWS IAM Integration Authentication Bypass
2167340 - CVE-2020-16251 vault: GCP Auth Method Allows Authentication Bypass
2167946 - CSV is missing disconnected env annotation and relatedImages spec
2168113 - [Ceph Tracker BZ #2141110] [cee/sd][Bluestore] Newly deployed bluestore OSD's showing high fragmentation score
2168635 - fix redirect link to operator details page (OCS dashboard)
2168840 - [Fusion-aaS][ODF 4.13]Within 'prometheus-ceph-rules' the namespace for 'rook-ceph-mgr' jobs should be configurable.
2168849 - Must-gather doesn't collect coredump logs crucial for OSD crash events
2169375 - CVE-2022-23541 jsonwebtoken: Insecure implementation of key retrieval function could lead to Forgeable Public/Private Tokens from RSA to HMAC
2169378 - CVE-2022-23540 jsonwebtoken: Insecure default algorithm in jwt.verify() could lead to signature validation bypass
2169779 - [vSphere]: rook-ceph-mon-* pvc are in pending state
2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS
2170673 - [RDR] Different replication states of PVC images aren't correctly distinguished and representated on UI
2172089 - [Tracker for Ceph BZ 2174461] rook-ceph-nfs pod is stuck at status 'CreateContainerError' after enabling NFS in ODF 4.13
2172365 - [csi-addons] odf-csi-addons-operator oomkilled with fresh installation 4.12
2172521 - No OSD pods are created for 4.13 LSO deployment
2173161 - ODF-console can not start when you disable IPv6 on Node with kernel parameter.
2173528 - Creation of OCS operator tag automatically for verified commits
2173534 - When on StorageSystem details click on History back btn it shows blank body
2173926 - [RFE] Include changes in MCG for new Ceph RGW transition headers
2175612 - noobaa-core-0 crashing and storagecluster not getting to ready state during ODF deployment with FIPS enabled in 4.13cluster
2175685 - RGW OBC creation via the UI is blocked by "Address form errors to proceed" error
2175714 - UI fix- capitalization
2175867 - Rook sets cephfs kernel mount options even when mon is using v1 port
2176080 - odf must-gather should collect output of oc get hpa -n openshift-storage
2176456 - [RDR] ramen-hub-operator and ramen-dr-cluster-operator is going into CLBO post deployment
2176739 - [UI] CSI Addons operator icon is broken
2176776 - Enable save options only when the protected apps has labels for manage DRPolicy
2176798 - [IBM Z ] Multi Cluster Orchestrator operator is not available in the Operator Hub
2176809 - [IBM Z ] DR operator is not available in the Operator Hub
2177134 - Next button if disabled for storage system deployment flow for IBM Ceph Storage security and network step when there is no OCS installed already
2177221 - Enable DR dashboard only when ACM observability is enabled
2177325 - Noobaa-db pod is taking longer time to start up in ODF 4.13
2177695 - DR dashbaord showing incorrect RPO data
2177844 - CVE-2023-24999 Hashicorp/vault: Vault Fails to Verify if Approle SecretID Belongs to Role During a Destroy Operation
2178033 - node topology warnings tab doesn't show pod warnings
2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding
2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption
2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics
2178588 - No rack names on ODF Topology
2178619 - odf-operator failing to resolve its sub-dependencies leaving the ocs-consumer/provider addon in a failed and halted state
2178682 - [GSS] Add the valid AWS GovCloud regions in OCS UI.
2179133 - [UI] A blank page appears while selecting Storage Pool for creating Encrypted Storage Class
2179337 - Invalid storage system href link on the ODF multicluster dashboard
2179403 - (4.13) Mons are failing to start when msgr2 is required with RHCS 6.1
2179846 - [IBM Z] In RHCS external mode Cephobjectstore creation fails as it reports that the "object store name cannot be longer than 38 characters"
2179860 - [MCG] Bucket replication with deletion sync isn't complete
2179976 - [ODF 4.13] Missing the status-reporter binary causing pods "report-status-to-provider" remain in CreateContainerError on ODF to ODF cluster on ROSA
2179981 - ODF Topology search bar mistakes to find searched node/pod
2179997 - Topology. Exit full screen does not appear in Full screen mode
2180211 - StorageCluster stuck in progressing state for Thales KMS deployment
2180397 - Last sync time is missing on application set's disaster recovery status popover
2180440 - odf-monitoring-tool. YAML file misjudged as corrupted
2180921 - Deployment with external cluster in ODF 4.13 with unable to use cephfs as backing store for image_registry
2181112 - [RDR] [UI] Hide disable DR functionality as it would be un-tested in 4.13
2181133 - CI: backport E2E job improvements
2181446 - [KMS][UI] PVC provisioning failed in case of vault kubernetes authentication is configured.
2181535 - [GSS] Object storage in degraded state
2181551 - Build: move to 'dependencies' the ones required for running a build
2181832 - Create OBC via UI, placeholder on StorageClass dropped
2181949 - [ODF Tracker] [RFE] Catch MDS damage to the dentry's first snapid
2182041 - OCS-Operator expects NooBaa CRDs to be present on the cluster when installed directly without ODF Operator
2182296 - [Fusion-aaS][ODF 4.13]must-gather does not collect relevant logs when storage cluster is not in openshift-storage namespace
2182375 - [MDR] Not able to fence DR clusters
2182644 - [IBM Z] MDR policy creation fails unless the ocs-operator pod is restarted on the managed clusters
2182664 - Topology view should hide the sidebar when changing levels
2182703 - [RDR] After upgrading from 4.12.2 to 4.13.0 version.odf.openshift.io cr is not getting updated with latest ODF version
2182972 - CVE-2023-25000 hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations
2182981 - CVE-2023-0665 hashicorp/vault: Vault's PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata
2183155 - failed to mount the the cephfs subvolume as subvolumegroup name is not sent in the GetStorageConfig RPC call
2183196 - [Fusion-aaS] Collect Must-gather logs from the managed-fusion agent namesapce
2183266 - [Fusion aaS Rook ODF 4.13]] Rook-ceph-operator pod should allow OBC CRDs to be optional instead of causing a crash when not present
2183457 - [RDR] when running any ceph cmd we see error 2023-03-31T08:25:31.844+0000 7f8deaffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
2183478 - [MDR][UI] Cannot relocate subscription based apps, Appset based apps are possible to relocate
2183520 - [Fusion-aaS] csi-cephfs-plugin pods are not created after installing ocs-client-operator
2184068 - [Fusion-aaS] Failed to mount CephFS volumes while creating pods
2184605 - [ODF 4.13][Fusion-aaS] OpenShift Data Foundation Client operator is listed in OperatorHub and installable from UI
2184663 - CVE-2023-0620 vault: Vault's Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File
2184769 - {Fusion-aaS][ODF 4.13]Remove storageclassclaim cr and create new cr storageclass request cr
2184773 - multicluster-orchestrator should not reset spec.network.multiClusterService.Enabled field added by user
2184892 - Don't pass encryption options to ceph cluster in odf external mode to provider/consumer cluster
2184984 - Topology Sidebar alerts panel: alerts accordion does not toggle when clicking on alert severity text
2185164 - [KMS][VAULT] PVC provisioning is failing when the Vault (HCP) Kubernetes authentication is set.
2185188 - Fix storagecluster watch request for OCSInitialization
2185757 - add NFS dashboard
2185871 - [MDR][ACM-Tracker] Deleting an Appset based application does not delete its placement
2186171 - [GSS] "disableLoadBalancerService: true" config is reconciled after modifying the number of NooBaa endpoints
2186225 - [RDR] when running any ceph cmd we see error 2023-03-31T08:25:31.844+0000 7f8deaffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
2186475 - handle different network connection spec & Pass appropriate options for all the cases of Network Spec
2186752 - [translations] add translations for 4.13
2187251 - sync ocs and odf with the latest rook
2187296 - [MCG] Can't opt out of deletions sync once log-based replication with deletions sync is set
2187736 - [RDR] Replication history graph is showing incorrect value
2187952 - When cluster controller is cancelled frequently, multiple simultaneous controllers cause issues since need to wait for shutdown before continuing new controller
2187969 - [ODFMS-Migration ] [OCS Client Operator] csi-rbdplugin stuck in ImagePullBackOff on consumer clusters after Migration
2187986 - [MDR] ramen-dr-cluster-operator pod is in CLBO after assigning dr policy to an appset based app
2188053 - ocs-metrics-exporter cannot list/watch StorageCluster, StorageClass, CephBlockPool and other resources
2188238 - [RDR] Avoid using the terminologies "SLA" in DR dashbaord
2188303 - [RDR] Maintenance mode is not enabled after initiating failover action
2188427 - [External mode upgrade]: Upgrade from 4.12 -> 4.13 external mode is failing because rook-ceph-operator is not reaching clean state
2188666 - wrong label in new storageclassrequest cr
2189483 - After upgrade noobaa-db-pg-0 pod using old image in one of container
2189929 - [RDR/MDR] [UI] Dashboard fon size are very uneven
2189982 - [RDR] ocs_rbd_client_blocklisted datapoints and the corresponding alert is not getting generated
2189984 - [KMS][VAULT] Storage cluster remains in 'Progressing' state during deployment with storage class encryption, despite all pods being up and running.
2190129 - OCS Provider Server logs are incorrect
2190241 - nfs metric details are unavailable and server health is displaying as "Degraded" under Network file system tab in UI
2192088 - [IBM P] rbd_default_map_options value not set to ms_mode=secure in in-transit encryption enabled ODF cluster
2192670 - Details tab for nodes inside Topology throws "Something went wrong" on IBM Power platform
2192824 - [4.13] Fix Multisite in external cluster
2192875 - Enable ceph-exporter in rook
2193114 - MCG replication is failing due to OC binary incompatible on Power platform
2193220 - [Stretch cluster] CephCluster is updated frequently due to changing ordering of zones
2196176 - MULTUS UI, There is no option to change the multus configuration after we configure the params
2196236 - [RDR] With ACM 2.8 User is not able to apply Drpolicy to subscription workload
2196298 - [RDR] DRPolicy doesn't show connected application when subscription based workloads are deployed via CLI
2203795 - ODF Monitoring is missing some of the ceph_* metric values
2208029 - nfs server health is always displaying as "Degraded" under Network file system tab in UI.
2208079 - rbd mirror daemon is commonly not upgraded
2208269 - [RHCS Tracker] After add capacity the rebalance does not complete, and we see 2 PGs in active+clean+scrubbing and 1 active+clean+scrubbing+deep
2208558 - [MDR] ramen-dr-cluster-operator pod crashes during failover
2208962 - [UI] ODF Topology. Degraded cluster don't show red canvas on cluster level
2209364 - ODF dashboard crashes when OCP and ODF are upgraded
2209643 - Multus, Cephobjectstore stuck on Progressing state because " failed to create or retrieve rgw admin ops user"
2209695 - When collecting Must-gather logs shows /usr/bin/gather_ceph_resources: line 341: jq: command not found
2210964 - [UI][MDR] After hub recovery in overview tab of data policies Application set apps count is not showing
2211334 - The replication history graph is very unclear
2211343 - [MCG-Only]: upgrade failed from 4.12 to 4.13 due to missing CSI_ENABLE_READ_AFFINITY in ConfigMap openshift-storage/ocs-operator-config
2211704 - Multipart uploads fail to a Azure namespace bucket when user MD is sent as part of the upload

5. 
References:

https://access.redhat.com/security/cve/CVE-2015-20107
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2020-10735
https://access.redhat.com/security/cve/CVE-2020-16250
https://access.redhat.com/security/cve/CVE-2020-16251
https://access.redhat.com/security/cve/CVE-2020-17049
https://access.redhat.com/security/cve/CVE-2021-3765
https://access.redhat.com/security/cve/CVE-2021-3807
https://access.redhat.com/security/cve/CVE-2021-4231
https://access.redhat.com/security/cve/CVE-2021-4235
https://access.redhat.com/security/cve/CVE-2021-4238
https://access.redhat.com/security/cve/CVE-2021-28861
https://access.redhat.com/security/cve/CVE-2021-43519
https://access.redhat.com/security/cve/CVE-2021-43998
https://access.redhat.com/security/cve/CVE-2021-44531
https://access.redhat.com/security/cve/CVE-2021-44532
https://access.redhat.com/security/cve/CVE-2021-44533
https://access.redhat.com/security/cve/CVE-2021-44964
https://access.redhat.com/security/cve/CVE-2021-46828
https://access.redhat.com/security/cve/CVE-2021-46848
https://access.redhat.com/security/cve/CVE-2022-0670
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1304
https://access.redhat.com/security/cve/CVE-2022-1348
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1587
https://access.redhat.com/security/cve/CVE-2022-2309
https://access.redhat.com/security/cve/CVE-2022-2509
https://access.redhat.com/security/cve/CVE-2022-2795
https://access.redhat.com/security/cve/CVE-2022-2879
https://access.redhat.com/security/cve/CVE-2022-2880
https://access.redhat.com/security/cve/CVE-2022-3094
https://access.redhat.com/security/cve/CVE-2022-3358
https://access.redhat.com/security/cve/CVE-2022-3515
https://access.redhat.com/security/cve/CVE-2022-3517
https://access.redhat.com/security/cve/CVE-2022-3715
https://access.redhat.com/security/cve/CVE-2022-3736
https://access.redhat.com/security/cve/CVE-2022-3821
https://access.redhat.com/security/cve/CVE-2022-3924
https://access.redhat.com/security/cve/CVE-2022-4415
https://access.redhat.com/security/cve/CVE-2022-21824
https://access.redhat.com/security/cve/CVE-2022-23540
https://access.redhat.com/security/cve/CVE-2022-23541
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-26280
https://access.redhat.com/security/cve/CVE-2022-27664
https://access.redhat.com/security/cve/CVE-2022-28805
https://access.redhat.com/security/cve/CVE-2022-29154
https://access.redhat.com/security/cve/CVE-2022-30635
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/cve/CVE-2022-32189
https://access.redhat.com/security/cve/CVE-2022-32190
https://access.redhat.com/security/cve/CVE-2022-33099
https://access.redhat.com/security/cve/CVE-2022-34903
https://access.redhat.com/security/cve/CVE-2022-35737
https://access.redhat.com/security/cve/CVE-2022-36227
https://access.redhat.com/security/cve/CVE-2022-37434
https://access.redhat.com/security/cve/CVE-2022-38149
https://access.redhat.com/security/cve/CVE-2022-38900
https://access.redhat.com/security/cve/CVE-2022-40023
https://access.redhat.com/security/cve/CVE-2022-40303
https://access.redhat.com/security/cve/CVE-2022-40304
https://access.redhat.com/security/cve/CVE-2022-40897
https://access.redhat.com/security/cve/CVE-2022-41316
https://access.redhat.com/security/cve/CVE-2022-41715
https://access.redhat.com/security/cve/CVE-2022-41717
https://access.redhat.com/security/cve/CVE-2022-41723
https://access.redhat.com/security/cve/CVE-2022-41724
https://access.redhat.com/security/cve/CVE-2022-41725
https://access.redhat.com/security/cve/CVE-2022-42010
https://access.redhat.com/security/cve/CVE-2022-42011
https://access.redhat.com/security/cve/CVE-2022-42012
https://access.redhat.com/security/cve/CVE-2022-42898
https://access.redhat.com/security/cve/CVE-2022-42919
https://access.redhat.com/security/cve/CVE-2022-43680
https://access.redhat.com/security/cve/CVE-2022-45061
https://access.redhat.com/security/cve/CVE-2022-45873
https://access.redhat.com/security/cve/CVE-2022-46175
https://access.redhat.com/security/cve/CVE-2022-47024
https://access.redhat.com/security/cve/CVE-2022-47629
https://access.redhat.com/security/cve/CVE-2022-48303
https://access.redhat.com/security/cve/CVE-2022-48337
https://access.redhat.com/security/cve/CVE-2022-48338
https://access.redhat.com/security/cve/CVE-2022-48339
https://access.redhat.com/security/cve/CVE-2023-0361
https://access.redhat.com/security/cve/CVE-2023-0620
https://access.redhat.com/security/cve/CVE-2023-0665
https://access.redhat.com/security/cve/CVE-2023-2491
https://access.redhat.com/security/cve/CVE-2023-22809
https://access.redhat.com/security/cve/CVE-2023-24329
https://access.redhat.com/security/cve/CVE-2023-24999
https://access.redhat.com/security/cve/CVE-2023-25000
https://access.redhat.com/security/cve/CVE-2023-25136
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc. 
Bug Fix(es):

* Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api (BZ#2033191)
* Restart of VM Pod causes SSH keys to be regenerated within VM (BZ#2087177)
* Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR (BZ#2089391)
* [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass (BZ#2098225)
* Fedora version in DataImportCrons is not 'latest' (BZ#2102694)
* [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted (BZ#2109407)
* CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls (BZ#2110562)
* Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based (BZ#2112643)
* Unable to start windows VMs on PSI setups (BZ#2115371)
* [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 (BZ#2128997)
* Mark Windows 11 as TechPreview (BZ#2129013)
* 
4.11.1 rpms (BZ#2139453)

This advisory contains the following OpenShift Virtualization 4.11.1 images.

RHEL-8-CNV-4.11

virt-cdi-operator-container-v4.11.1-5
virt-cdi-uploadserver-container-v4.11.1-5
virt-cdi-apiserver-container-v4.11.1-5
virt-cdi-importer-container-v4.11.1-5
virt-cdi-controller-container-v4.11.1-5
virt-cdi-cloner-container-v4.11.1-5
virt-cdi-uploadproxy-container-v4.11.1-5
checkup-framework-container-v4.11.1-3
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7
kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7
kubevirt-template-validator-container-v4.11.1-4
virt-handler-container-v4.11.1-5
hostpath-provisioner-operator-container-v4.11.1-4
virt-api-container-v4.11.1-5
vm-network-latency-checkup-container-v4.11.1-3
cluster-network-addons-operator-container-v4.11.1-5
virtio-win-container-v4.11.1-4
virt-launcher-container-v4.11.1-5
ovs-cni-marker-container-v4.11.1-5
hyperconverged-cluster-webhook-container-v4.11.1-7
virt-controller-container-v4.11.1-5
virt-artifacts-server-container-v4.11.1-5
kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7
kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7
libguestfs-tools-container-v4.11.1-5
hostpath-provisioner-container-v4.11.1-4
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7
kubevirt-tekton-tasks-copy-template-container-v4.11.1-7
cnv-containernetworking-plugins-container-v4.11.1-5
bridge-marker-container-v4.11.1-5
virt-operator-container-v4.11.1-5
hostpath-csi-driver-container-v4.11.1-4
kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7
kubemacpool-container-v4.11.1-5
hyperconverged-cluster-operator-container-v4.11.1-7
kubevirt-ssp-operator-container-v4.11.1-4
ovs-cni-plugin-container-v4.11.1-5
kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7
kubevirt-tekton-tasks-operator-container-v4.11.1-2
cnv-must-gather-container-v4.11.1-8
kubevirt-console-plugin-container-v4.11.1-9
hco-bundle-registry-container-v4.11.1-49

3. Bugs fixed (https://bugzilla.redhat.com/):

2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays

5. JIRA issues fixed (https://issues.jboss.org/):

LOG-3293 - log-file-metric-exporter container has not limits exhausting the resources of the node

6. Description:

Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud.

For more information about Submariner, see the Submariner open source community website at: https://submariner.io/.

Security fixes:

* CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
* CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
* CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
* CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests

Bugs addressed:

* subctl diagnose firewall metrics does not work on merged kubeconfig (BZ# 2013711)
* [Submariner] - Fails to increase gateway amount after deployment (BZ# 2097381)
* Submariner gateway node does not get deleted with subctl cloud cleanup command (BZ# 2108634)
* submariner GW pods are unable to resolve the DNS of the Broker K8s API URL (BZ# 2119362)
* Submariner gateway node does not get deployed after applying ManagedClusterAddOn on Openstack (BZ# 2124219)
* unable to run subctl benchmark latency, pods fail with ImagePullBackOff (BZ# 2130326)
* [IBM Z] - Submariner addon unistallation doesnt work from ACM console (BZ# 2136442)
* Tags on AWS security group for gateway node break cloud-controller LoadBalancer (BZ# 2139477)
* RHACM - Submariner: UI support for
OpenStack #19297 (ACM-1242)\n* Submariner OVN support (ACM-1358)\n* Submariner Azure Console support (ACM-1388)\n* ManagedClusterSet consumers migrate to v1beta2 (ACM-1614)\n* Submariner on disconnected ACM #22000 (ACM-1678)\n* Submariner gateway: Error creating AWS security group if already exists\n(ACM-2055)\n* Submariner gateway security group in AWS not deleted when uninstalling\nsubmariner (ACM-2057)\n* The submariner-metrics-proxy pod pulls an image with wrong naming\nconvention (ACM-2058)\n* The submariner-metrics-proxy pod is not part of the Agent readiness check\n(ACM-2067)\n* Subctl 0.14.0 prints version \"vsubctl\" (ACM-2132)\n* managedclusters \"local-cluster\" not found and missing Submariner Broker\nCRD (ACM-2145)\n* Add support of ARO to Submariner deployment (ACM-2150)\n* The e2e tests execution fails for \"Basic TCP connectivity\" tests\n(ACM-2204)\n* Gateway error shown \"diagnose all\" tests (ACM-2206)\n* Submariner does not support cluster \"kube-proxy ipvs mode\"(ACM-2211)\n* Vsphere cluster shows Pod Security admission controller warnings\n(ACM-2256)\n* Cannot use submariner with OSP and self signed certs (ACM-2274)\n* Subctl diagnose tests spawn nettest image with wrong tag nameing\nconvention (ACM-2387)\n* Subctl 0.14.1 prints version \"devel\" (ACM-2482)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2013711 - subctl diagnose firewall metrics does not work on merged kubeconfig\n2097381 - [Submariner] - Fails to increase gateway amount after deployment\n2108634 - Submariner gateway node does not get deleted with subctl cloud cleanup command\n2119362 - submariner GW pods are unable to resolve the DNS of the Broker K8s API URL\n2124219 - Submariner gateway node does not get deployed after applying ManagedClusterAddOn on Openstack\n2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY\n2130326 - unable to run subctl benchmark latency, pods fail with ImagePullBackOff\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2136442 - [IBM Z] - Submariner addon unistallation doesnt work from ACM console\n2139477 - Tags on AWS security group for gateway node break cloud-controller LoadBalancer\n2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nACM-1614 - ManagedClusterSet consumers migrate to v1beta2 (Submariner)\nACM-2055 - Submariner gateway: Error creating AWS security group if already exists\nACM-2057 - [Submariner] - submariner gateway security group in aws not deleted when uninstalling submariner\nACM-2058 - [Submariner] - The submariner-metrics-proxy pod pulls an image with wrong naming convention\nACM-2067 - [Submariner] - The submariner-metrics-proxy pod is not part of the Agent readiness check\nACM-2132 - Subctl 0.14.0 prints version \"vsubctl\"\nACM-2145 - managedclusters \"local-cluster\" not found and missing Submariner Broker CRD\nACM-2150 - Add support of ARO to Submariner deployment\nACM-2204 - [Submariner] - e2e tests execution fails for \"Basic TCP connectivity\" tests\nACM-2206 - [Submariner] - Gateway error shown \"diagnose all\" tests\nACM-2211 - [Submariner] - Submariner does not support cluster \"kube-proxy ipvs mode\"\nACM-2256 - [Submariner] - Vsphere cluster shows Pod Security admission controller warnings\nACM-2274 - Cannot use submariner with OSP and self signed certs\nACM-2387 - [Submariner] - subctl diagnose tests spawn nettest image with wrong tag nameing convention\nACM-2482 - Subctl 0.14.1 prints version \"devel\"\n\n6. 
This advisory contains the following\nOpenShift Virtualization 4.12.0 images:\n\nSecurity Fix(es):\n\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n\n* kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n(CVE-2022-1798)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n\n* golang: net/http: improper sanitization of Transfer-Encoding header\n(CVE-2022-1705)\n\n* golang: go/parser: stack exhaustion in all Parse* functions\n(CVE-2022-1962)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\n* golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)\n\n* golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)\n\n* golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)\n\n* golang: net/http/httputil: NewSingleHostReverseProxy - omit\nX-Forwarded-For not working (CVE-2022-32148)\n\n* golang: crypto/tls: session tickets lack random ticket_age_add\n(CVE-2022-30629)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nRHEL-8-CNV-4.12\n\n=============\nbridge-marker-container-v4.12.0-24\ncluster-network-addons-operator-container-v4.12.0-24\ncnv-containernetworking-plugins-container-v4.12.0-24\ncnv-must-gather-container-v4.12.0-58\nhco-bundle-registry-container-v4.12.0-769\nhostpath-csi-driver-container-v4.12.0-30\nhostpath-provisioner-container-v4.12.0-30\nhostpath-provisioner-operator-container-v4.12.0-31\nhyperconverged-cluster-operator-container-v4.12.0-96\nhyperconverged-cluster-webhook-container-v4.12.0-96\nkubemacpool-container-v4.12.0-24\nkubevirt-console-plugin-container-v4.12.0-182\nkubevirt-ssp-operator-container-v4.12.0-64\nkubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55\nkubevirt-tekton-tasks-copy-template-container-v4.12.0-55\nkubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55\nkubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55\nkubevirt-tekton-tasks-operator-container-v4.12.0-40\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55\nkubevirt-template-validator-container-v4.12.0-32\nlibguestfs-tools-container-v4.12.0-255\novs-cni-marker-container-v4.12.0-24\novs-cni-plugin-container-v4.12.0-24\nvirt-api-container-v4.12.0-255\nvirt-artifacts-server-container-v4.12.0-255\nvirt-cdi-apiserver-container-v4.12.0-72\nvirt-cdi-cloner-container-v4.12.0-72\nvirt-cdi-controller-container-v4.12.0-72\nvirt-cdi-importer-container-v4.12.0-72\nvirt-cdi-operator-container-v4.12.0-72\nvirt-cdi-uploadproxy-container-v4.12.0-71\nvirt-cdi-uploadserver-container-v4.12.0-72\nvirt-controller-container-v4.12.0-255\nvirt-exportproxy-container-v4.12.0-255\nvirt-exportserver-container-v4.12.0-255\nvirt-handler-container-v4.12.0-255\nvirt-launcher-container-v4.12.0-255\nvirt-operator-container-v4.12.0-255\nvirtio-win-container-v4.12.0-10\nvm-network-latency-checkup-
container-v4.12.0-89\n\n3. Solution:\n\nBefore applying this update, you must apply all previously released errata\nrelevant to your system. \n\nTo apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1719190 - Unable to cancel live-migration if virt-launcher pod in pending state\n2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2040377 - Unable to delete failed VMIM after VM deleted\n2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed\n2052556 - Metric \"kubevirt_num_virt_handlers_by_node_running_virt_launcher\" reporting incorrect value\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2060499 - [RFE] Cannot add additional service (or other objects) to VM template\n2069098 - Large scale |VMs migration is slow due to low migration parallelism\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2071491 - Storage Throughput metrics are incorrect in Overview\n2072797 - Metrics in Virtualization -\u003e Overview period is not clear or configurable\n2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers\n2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the 
graphs not visible enough in dark mode\n2086551 - Min CPU feature found in labels\n2087724 - Default template show no boot source even there are auto-upload boot sources\n2088129 - [SSP] webhook does not comply with restricted security context\n2088464 - [CDI] cdi-deployment does not comply with restricted security context\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2089744 - HCO should label its control plane namespace to admit pods at privileged security level\n2089751 - 4.12.0 containers\n2089804 - 4.12.0 rpms\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer\n2093771 - The disk source should be PVC if the template has no auto-update boot source\n2093996 - kubectl get vmi API should always return primary interface if exist\n2094202 - Cloud-init username field should have hint\n2096285 - KubeVirt CR API documentation is missing docs for many fields\n2096780 - [RFE] Add ssh-key and sysprep to template scripts tab\n2097436 - Online disk expansion ignores filesystem overhead change\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2099556 - [RFE] Add option to enable RDP service for windows vm\n2099573 - [RFE] Improve template\u0027s message about not editable\n2099923 - [RFE] Merge \"SSH access\" and \"SSH command\" into one\n2100290 - Error is not dismissed on catalog review page\n2100436 - VM list filtering ignores VMs in error-states\n2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2100629 - Update nested support KBASE article\n2100679 - The number of hardware devices is not correct in vm overview 
tab\n2100682 - All hardware devices get deleted while just delete one\n2100684 - Workload profile are not editable during creation and after creation\n2101144 - VM filter has two \"Other\" checkboxes which are triggered together\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101167 - Edit buttons clickable area is too large. \n2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state\n2101390 - Easy to miss the \"tick\" when adding GPU device to vm via UI\n2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2101423 - wrong user name on using ignition\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101445 - \"Pending changes - Boot Order\"\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101499 - Cannot add NIC to VM template as non-priv user\n2101501 - NAME parameter in VM template has no effect. 
\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101667 - VMI view is not aligned with vm and tempates\n2101681 - All templates are labeling \"source available\" in template list page\n2102074 - VM Creation time on VM Overview Details card lacks string\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102132 - align the utilization card of single VM overview with the design\n2102138 - Should the word \"new\" be removed from \"Create new VirtualMachine from catalog\"?\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102475 - Template \u0027vm-template-example\u0027 should be filtered by \u0027Fedora\u0027 rather than \u0027Other\u0027\n2102561 - sysprep-info should link to downstream doc\n2102737 - Clone a VM should lead to vm overview tab\n2102740 - \"Save\" button on vm clone modal should be \"Clone\"\n2103806 - \"404: Not Found\" appears shortly by clicking the PVC link on vm disk tab\n2103807 - PVC is not named by VM name while creating vm quickly\n2103817 - Workload profile values in vm details should align with template\u0027s value\n2103844 - VM nic model is empty\n2104331 - VM list page scroll up automatically\n2104402 - VM create button is not enabled while adding multiple environment disks\n2104422 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2104424 - Enable descheduler or hide it on template\u0027s scheduling tab\n2104479 - [4.12] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2104480 - Alerts in VM overview tab disappeared after a few seconds\n2104785 - \"Add disk\" and \"Disks\" are on the same line\n2104859 - [RFE] Add \"Copy SSH command\" to VM action list\n2105257 - Can\u0027t set log verbosity level for virt-operator pod\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106963 - Cannot add configmap for windows 
VM\n2107279 - VM Template\u0027s bootable disk can be marked as bootable\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108339 - datasource does not provide timestamp when updated\n2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2109818 - Upstream metrics documentation is not detailed enough\n2109975 - DataVolume fails to import \"cirros-container-disk-demo\" image\n2110256 - Storage -\u003e PVC -\u003e upload data, does not support source reference\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2111240 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111328 - kubevirt plugin console crashed after visit vmi page\n2111378 - VM SSH command generated by UI points at api VIP\n2111744 - Cloned template should not label `app.kubernetes.io/name: common-templates`\n2111794 - the virtlogd process is taking too much RAM! 
(17468Ki \u003e 17Mi)\n2112900 - button style are different\n2114516 - Nothing happens after clicking on Fedora cloud image list link\n2114636 - The style of displayed items are not unified on VM tabs\n2114683 - VM overview tab is crashed just after the vm is created\n2115257 - Need to Change system-product-name to \"OpenShift Virtualization\" in CNV-4.12\n2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items\n2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates\n2116225 - The filter keyword of the related operator \u0027Openshift Data Foundation\u0027 is \u0027OCS\u0027 rather than \u0027ODF\u0027\n2116644 - Importer pod is failing to start with error \"MountVolume.SetUp failed for volume \"cdi-proxy-cert-vol\" : configmap \"custom-ca\" not found\"\n2117549 - Cannot edit cloud-init data after add ssh key\n2117803 - Cannot edit ssh even vm is stopped\n2117813 - Improve descriptive text of VM details while VM is off\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n2118257 - outdated doc link tolerations modal\n2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format\n2119069 - Unable to start windows VMs on PSI setups\n2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2119309 - readinessProbe in VM stays on failed\n2119615 - Change the disk size causes the unit changed\n2120907 - Cannot filter disks by label\n2121320 - Negative values in migration metrics\n2122236 - Failing to delete HCO with SSP sticking around\n2122990 - VMExport should check APIGroup\n2124147 - \"ReadOnlyMany\" should not be added to supported values in memory dump\n2124307 - Ui crash/stuck on loading when trying to detach disk on a VM\n2124528 - On upgrade, when live-migration is failed due to an infra issue, 
virt-handler continuously and endlessly tries to migrate it\n2124555 - View documentation link on MigrationPolicies page des not work\n2124557 - MigrationPolicy description is not displayed on Details page\n2124558 - Non-privileged user can start MigrationPolicy creation\n2124565 - Deleted DataSource reappears in list\n2124572 - First annotation can not be added to DataSource\n2124582 - Filtering VMs by OS does not work\n2124594 - Docker URL validation is inconsistent over application\n2124597 - Wrong case in Create DataSource menu\n2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile\n2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state\n2127787 - Expose the PVC source of the dataSource on UI\n2127843 - UI crashed by selecting \"Live migration network\"\n2127931 - Change default time range on Virtualization -\u003e Overview -\u003e Monitoring dashboard to 30 minutes\n2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer\n2128002 - Error after VM template deletion\n2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128948 - Cannot create DataSource from default YAML\n2128949 - Cannot create MigrationPolicy from example YAML\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129234 - Service is not deleted along with the VM when the VM is created from a template with service\n2129301 - Cloud-init network data don\u0027t wipe out on uncheck checkbox \u0027Add network data\u0027\n2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook\n2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV\n2130588 - crypto-policy : Common Ciphers support by apiserver and hco\n2130695 
- crypto-policy : Logging Improvement and publish the source of ciphers\n2130909 - Non-privileged user can start DataSource creation\n2131157 - KV data transfer rate chart in VM Metrics tab is not displayed\n2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough\n2131674 - Bump virtlogd memory requirement to 20Mi\n2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11\n2132682 - Default YAML entity name convention. \n2132721 - Delete dialogs\n2132744 - Description text is missing in Live Migrations section\n2132746 - Background is broken in Virtualization Monitoring page\n2132783 - VM can not be created from Template with edited boot source\n2132793 - Edited Template BSR is not saved\n2132932 - Typo in PVC size units menu\n2133540 - [pod security violation audit] Audit violation in \"cni-plugins\" container should be fixed\n2133541 - [pod security violation audit] Audit violation in \"bridge-marker\" container should be fixed\n2133542 - [pod security violation audit] Audit violation in \"manager\" container should be fixed\n2133543 - [pod security violation audit] Audit violation in \"kube-rbac-proxy\" container should be fixed\n2133655 - [pod security violation audit] Audit violation in \"cdi-operator\" container should be fixed\n2133656 - [4.12][pod security violation audit] Audit violation in \"hostpath-provisioner-operator\" container should be fixed\n2133659 - [pod security violation audit] Audit violation in \"cdi-controller\" container should be fixed\n2133660 - [pod security violation audit] Audit violation in \"cdi-source-update-poller\" container should be fixed\n2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod\n2134672 - [e2e] add data-test-id for catalog -\u003e storage section\n2134825 - Authorization for expand-spec endpoint missing\n2135805 - Windows 2022 template is missing vTPM and UEFI params in spec\n2136051 - Name jumping 
when trying to create a VM with source from catalog\n2136425 - Windows 11 is detected as Windows 10\n2136534 - Not possible to specify a TTL on VMExports\n2137123 - VMExport: export pod is not PSA complaint\n2137241 - Checkbox about delete vm disks is not loaded while deleting VM\n2137243 - registery input add docker prefix twice\n2137349 - \"Manage source\" action infinitely loading on DataImportCron details page\n2137591 - Inconsistent dialog headings/titles\n2137731 - Link of VM status in overview is not working\n2137733 - No link for VMs in error status in \"VirtualMachine statuses\" card\n2137736 - The column name \"MigrationPolicy name\" can just be \"Name\"\n2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly\n2138112 - Unsupported S3 endpoint option in Add disk modal\n2138119 - \"Customize VirtualMachine\" flow is not user-friendly because settings are split into 2 modals\n2138199 - Win11 and Win22 templates are not filtered properly by Template provider\n2138653 - Saving Template prameters reloads the page\n2138657 - Setting DATA_SOURCE_* Template parameters makes VM creation fail\n2138664 - VM that was created with SSH key fails to start\n2139257 - Cannot add disk via \"Using an existing PVC\"\n2139260 - Clone button is disabled while VM is running\n2139293 - Non-admin user cannot load VM list page\n2139296 - Non-admin cannot load MigrationPolicies page\n2139299 - No auto-generated VM name while creating VM by non-admin user\n2139306 - Non-admin cannot create VM via customize mode\n2139479 - virtualization overview crashes for non-priv user\n2139574 - VM name gets \"emptyname\" if click the create button quickly\n2139651 - non-priv user can click create when have no permissions\n2139687 - catalog shows template list for non-priv users\n2139738 - [4.12]Can\u0027t restore cloned VM\n2139820 - non-priv user cant reach vm details\n2140117 - Provide upgrade path from 4.11.1-\u003e4.12.0\n2140521 - Click the breadcrumb 
list about \"VirtualMachines\" goes to undefined project\n2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user\n2140627 - Not able to select storageClass if there is no default storageclass defined\n2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user\n2140808 - Hyperv feature set to \"enabled: false\" prevents scheduling\n2140977 - Alerts number is not correct on Virtualization overview\n2140982 - The base template of cloned template is \"Not available\"\n2140998 - Incorrect information shows in overview page per namespace\n2141089 - Unable to upload boot images. \n2141302 - Unhealthy states alerts and state metrics are missing\n2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations\n2141494 - \"Start in pause mode\" option is not available while creating the VM\n2141654 - warning log appearing on VMs: found no SR-IOV networks\n2141711 - Node column selector is redundant for non-priv user\n2142468 - VM action \"Stop\" should not be disabled when VM in pause state\n2142470 - Delete a VM or template from all projects leads to 404 error\n2142511 - Enhance alerts card in overview\n2142647 - Error after MigrationPolicy deletion\n2142891 - VM latency checkup: Failed to create the checkup\u0027s Job\n2142929 - Permission denied when try get instancestypes\n2143268 - Topolvm storageProfile missing accessModes and volumeMode\n2143498 - Could not load template while creating VM from catalog\n2143964 - Could not load template while creating VM from catalog\n2144580 - \"?\" icon is too big in VM Template Disk tab\n2144828 - \"?\" icon is too big in VM Template Disk tab\n2144839 - Alerts number is not correct on Virtualization overview\n2153849 - After upgrade to 4.11.1-\u003e4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten\n2155757 - Incorrect upstream-version label \"v1.6.0-unstable-410-g09ea881c\" is tagged to 4.12 
hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container\n\n5. Description:\n\nThe rh-sso-7/sso76-openshift-rhel8 container image and\nrh-sso-7/sso7-rhel8-operator operator has been updated for RHEL-8 based\nMiddleware Containers to address the following security issues. Users of these images\nare also encouraged to rebuild all container images that depend on these\nimages. \n\nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2138971 - CVE-2022-3782 keycloak: path traversal via double URL encoding\n2141404 - CVE-2022-3916 keycloak: Session takeover with OIDC offline refreshtokens\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nCIAM-4412 - Build new OCP image for rh-sso-7/sso76-openshift-rhel8\nCIAM-4413 - Generate new operator bundle image for this patch\n\n6. Summary:\n\nAn update is now available for Migration Toolkit for Runtimes (v1.0.1). Bugs fixed (https://bugzilla.redhat.com/):\n\n2142707 - CVE-2022-42920 Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service\n2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2148199 - CVE-2022-39278 Istio: Denial of service attack via a specially crafted message\n2148661 - CVE-2022-3962 kiali: error message spoofing in kiali UI\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOSSM-1977 - Support for Istio Gateway API in Kiali\nOSSM-2083 - Update maistra/istio 2.3 to Istio 1.14.5\nOSSM-2147 - Unexpected validation message on Gateway object\nOSSM-2169 - Member controller doesn\u0027t retry on conflict\nOSSM-2170 - Member namespaces aren\u0027t cleaned up when a cluster-scoped SMMR is deleted\nOSSM-2179 - Wasm plugins only support OCI images with 1 layer\nOSSM-2184 - Istiod isn\u0027t allowed to delete analysis distribution report configmap\nOSSM-2188 - Member namespaces not cleaned up when SMCP is deleted\nOSSM-2189 - If multiple SMCPs exist in a namespace, the controller reconciles them all\nOSSM-2190 - The memberroll controller reconciles SMMRs with invalid name\nOSSM-2232 - The member controller reconciles ServiceMeshMember with invalid name\nOSSM-2241 - Remove v2.0 from Create ServiceMeshControlPlane Form\nOSSM-2251 - CVE-2022-3962 openshift-istio-kiali-container: kiali: content spoofing [ossm-2.3]\nOSSM-2308 - add root CA certificates to kiali container\nOSSM-2315 - be able to customize openshift auth timeouts\nOSSM-2324 - 
Gateway injection does not work when pods are created by cluster admins\nOSSM-2335 - Potential hang using Traces scatterplot chart\nOSSM-2338 - Federation deployment does not need router mode sni-dnat\nOSSM-2344 - Restarting istiod causes Kiali to flood CRI-O with port-forward requests\nOSSM-2375 - Istiod should log member namespaces on every update\nOSSM-2376 - ServiceMesh federation stops working after the restart of istiod pod\nOSSM-535 - Support validationMessages in SMCP\nOSSM-827 - ServiceMeshMembers point to wrong SMCP name\n\n6. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.6.3 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):\n\n2129679 - clusters belong to global clusterset is not selected by placement when rescheduling\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2139085 - RHACM 2.6.3 images\n2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements\n\n5. \n\nThe following advisory data is extracted from:\n\nhttps://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0254.json\n\nRed Hat officially shut down their mailing list notifications October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. 
It must be noted that due to an inability to easily track revision updates without crawling Red Hat\u0027s archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment. \n\n\n\n\nDescription:\n\nThe rsync utility enables users to copy and synchronize files locally or across a network. Synchronization with rsync is fast because rsync only sends the differences in files over the network instead of sending whole files. The rsync utility is also used as a mirroring tool",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-37434"
},
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "PACKETSTORM",
"id": "173605"
},
{
"db": "PACKETSTORM",
"id": "173107"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170179"
},
{
"db": "PACKETSTORM",
"id": "170898"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "170759"
},
{
"db": "PACKETSTORM",
"id": "170806"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "176559"
}
],
"trust": 1.98
},
"exploit_availability": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-428208",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
}
]
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-37434",
"trust": 2.2
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/08/05/2",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/08/09/1",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "169707",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170027",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169503",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171271",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169726",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169624",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168107",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169566",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169906",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169783",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169557",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168113",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169577",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168765",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169595",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-428208",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "173605",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "173107",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170083",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170179",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170898",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170741",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170210",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170759",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170806",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170242",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "176559",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "PACKETSTORM",
"id": "173605"
},
{
"db": "PACKETSTORM",
"id": "173107"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170179"
},
{
"db": "PACKETSTORM",
"id": "170898"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "170759"
},
{
"db": "PACKETSTORM",
"id": "170806"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "176559"
},
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"id": "VAR-202208-0404",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T21:15:51.322000Z",
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/37"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/38"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/41"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/42"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2022/dsa-5218"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/pavpqncg3xrlclnsqrm3kan5zfmvxvty/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nmboj77a7t7pqcarmduk75te6llesz3o/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/yrqai7h4m4rqz2iwzueexecbe5d56bh2/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/x5u7otkzshy2i3zfjsr2shfhw72rkgdk/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jwn4ve3jqr4o2sous5txnlanrpmhwv4i/"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/09/msg00012.html"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2022/08/05/2"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2022/08/09/1"
},
{
"trust": 1.1,
"url": "https://github.com/curl/curl/issues/9271"
},
{
"trust": 1.1,
"url": "https://github.com/ivd38/zlib_overflow"
},
{
"trust": 1.1,
"url": "https://github.com/madler/zlib/blob/21767c654d31d2dccdde4330529775c6c5fd5389/zlib.h#l1062-l1063"
},
{
"trust": 1.1,
"url": "https://github.com/madler/zlib/commit/eff308af425b67093bab25f80f1ae950166bece1"
},
{
"trust": 1.1,
"url": "https://github.com/nodejs/node/blob/75b68c6e4db515f76df73af476eccf382bbcb00a/deps/zlib/inflate.c#l762-l764"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220901-0005/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213488"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213489"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213490"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213491"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213493"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213494"
},
{
"trust": 1.0,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 1.0,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 1.0,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 1.0,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 1.0,
"url": "https://security.netapp.com/advisory/ntap-20230427-0007/"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42012"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42010"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42011"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-35737"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-46848"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-41715"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-2880"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-43680"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-27664"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2015-20107"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30698"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30699"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0924"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0908"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0562"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22844"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0865"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0909"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0561"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0891"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1355"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-47629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38177"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38178"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-24329"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3517"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4238"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2879"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3821"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32189"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-41717"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-0308"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-0256"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24795"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0391"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0934"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24448"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2639"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1055"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26373"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-20368"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1048"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0617"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0854"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29581"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1016"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2078"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2938"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21499"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-36946"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36558"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1852"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0168"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28390"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27950"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2586"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23960"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3640"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-30002"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1184"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25255"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36516"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28893"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3787"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0891"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0908"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0909"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0215"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1281"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.1,
"url": "https://registry.centos.org/v2/\":"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:4053"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://issues.redhat.com/):"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-32233"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4304"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2023:4052"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23540"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41316"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2795"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-48303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-36227"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-45873"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-2491"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43998"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40897"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41724"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44531"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41725"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38149"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28805"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-25136"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26280"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-48337"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43519"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1587"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4415"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-45061"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28861"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0620"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:3742"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43519"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-24999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-25000"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-22809"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40023"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-47024"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16251"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28861"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44533"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-46175"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44532"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10735"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3358"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44964"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3736"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-17049"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3715"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24903"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43998"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38900"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1348"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-48338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42919"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16251"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-33099"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-48339"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-46828"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2309"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41723"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-17049"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10735"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4231"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3094"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28327"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24921"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8889"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21618"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39399"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42003"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21626"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36518"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42004"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2601"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2601"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/submariner#deploying-submariner-console"
},
{
"trust": 0.1,
"url": "https://submariner.io/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41974"
},
{
"trust": 0.1,
"url": "https://submariner.io/getting-started/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2509"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0631"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1798"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27404"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27405"
},
{
"trust": 0.1,
"url": "https://catalog.redhat.com/software/containers/registry/registry.access.redhat.com/repository/rh-sso-7/sso76-openshift-rhel8"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8964"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1471"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42920"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0470"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1355"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1471"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39278"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21713"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0542"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21713"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23648"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23648"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21703"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21698"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21703"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21702"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3962"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21702"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41912"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:9040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0254.json"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2116639"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2024:0254"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-37434"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "PACKETSTORM",
"id": "173605"
},
{
"db": "PACKETSTORM",
"id": "173107"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170179"
},
{
"db": "PACKETSTORM",
"id": "170898"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "170759"
},
{
"db": "PACKETSTORM",
"id": "170806"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "176559"
},
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "PACKETSTORM",
"id": "173605"
},
{
"db": "PACKETSTORM",
"id": "173107"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170179"
},
{
"db": "PACKETSTORM",
"id": "170898"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "170759"
},
{
"db": "PACKETSTORM",
"id": "170806"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "176559"
},
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-08-05T00:00:00",
"db": "VULHUB",
"id": "VHN-428208"
},
{
"date": "2023-07-19T15:37:11",
"db": "PACKETSTORM",
"id": "173605"
},
{
"date": "2023-06-23T14:56:34",
"db": "PACKETSTORM",
"id": "173107"
},
{
"date": "2022-12-02T15:57:08",
"db": "PACKETSTORM",
"id": "170083"
},
{
"date": "2022-12-09T14:52:40",
"db": "PACKETSTORM",
"id": "170179"
},
{
"date": "2023-02-08T16:00:47",
"db": "PACKETSTORM",
"id": "170898"
},
{
"date": "2023-01-26T15:29:09",
"db": "PACKETSTORM",
"id": "170741"
},
{
"date": "2022-12-13T17:16:20",
"db": "PACKETSTORM",
"id": "170210"
},
{
"date": "2023-01-27T15:03:38",
"db": "PACKETSTORM",
"id": "170759"
},
{
"date": "2023-01-31T17:11:04",
"db": "PACKETSTORM",
"id": "170806"
},
{
"date": "2022-12-15T15:34:35",
"db": "PACKETSTORM",
"id": "170242"
},
{
"date": "2024-01-16T13:46:07",
"db": "PACKETSTORM",
"id": "176559"
},
{
"date": "2022-08-05T07:15:07.240000",
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-01-09T00:00:00",
"db": "VULHUB",
"id": "VHN-428208"
},
{
"date": "2023-07-19T00:56:46.373000",
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "173107"
}
],
"trust": 0.1
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat Security Advisory 2023-4053-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "173605"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "173605"
}
],
"trust": 0.1
}
}
VAR-202108-2221
Vulnerability from variot - Updated: 2024-07-23 21:13
curl supports the -t command line option, known as CURLOPT_TELNETOPTIONS in libcurl. This rarely used option is used to send variable=content pairs to TELNET servers. Due to a flaw in the option parser for sending NEW_ENV variables, libcurl could be made to pass on uninitialized data from a stack-based buffer to the server, potentially revealing sensitive internal information to the server over a clear-text network protocol. This could happen because curl did not call and use sscanf() correctly when parsing the string provided by the application. A flaw has been found in curl. The fix for CVE-2021-22898 does not remedy the vulnerability. The highest threat from this vulnerability is to confidentiality. Description:
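curl's bug is in C, but the underlying failure class, a scanf-style parse whose result count is never checked, so part of a buffer is forwarded unset, translates to any language. A minimal Go sketch of the safe pattern; the parsePair helper is hypothetical and not curl's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// parsePair is a hypothetical stand-in for curl's NEW_ENV option parser.
// The safe pattern: only forward fields the parse actually produced,
// never a buffer that may still hold stale or uninitialized contents.
func parsePair(s string) (name, value string, ok bool) {
	return strings.Cut(s, ",")
}

func main() {
	for _, in := range []string{"USER,alice", "USER"} {
		if name, value, ok := parsePair(in); ok {
			fmt.Printf("send NEW_ENV %s=%s\n", name, value)
		} else {
			fmt.Printf("refuse malformed option %q\n", in)
		}
	}
}
```

The vulnerable pattern is the inverse: sending both fields regardless of whether the separator was found, which is analogous to ignoring sscanf()'s return value.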
Gatekeeper Operator v0.2
Gatekeeper is an open source project that applies the OPA Constraint Framework to enforce policies on your Kubernetes clusters.
This advisory contains the container images for Gatekeeper that include security updates, and container upgrades. For support options for any other use, see the Gatekeeper open source project website at: https://open-policy-agent.github.io/gatekeeper/website/docs/howto/.
Security updates:
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
The requirements to apply the upgraded images are different whether or not you used the operator. Complete the following steps, depending on your installation:
- Upgrade gatekeeper operator: The gatekeeper operator that is installed by the gatekeeper operator policy has installPlanApproval set to Automatic. This setting means the operator will be upgraded automatically when there is a new version of the operator. No further action is required for upgrade. If you changed the setting for installPlanApproval to manual, then you must view each cluster to manually approve the upgrade to the operator.
- Upgrade gatekeeper without the operator: The gatekeeper version is specified as part of the Gatekeeper CR in the gatekeeper operator policy. To upgrade the gatekeeper version: a) Determine the latest version of gatekeeper by visiting: https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. b) Click the tag dropdown, and find the latest static tag. An example tag is 'v3.3.0-1'. c) Edit the gatekeeper operator policy and update the image tag to use the latest static tag. For example, you might change this line to image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'. Bugs fixed (https://bugzilla.redhat.com/):
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements
- Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header 2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data 2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way 2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: ACS 3.67 security and enhancement update Advisory ID: RHSA-2021:4902-01 Product: RHACS Advisory URL: https://access.redhat.com/errata/RHSA-2021:4902 Issue date: 2021-12-01 CVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2020-27304 CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 CVE-2021-3749 CVE-2021-3800 CVE-2021-3801 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23343 CVE-2021-23840 CVE-2021-23841 CVE-2021-27645 CVE-2021-28153 CVE-2021-29923 CVE-2021-32690 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-39293 =====================================================================
- Summary:
Updated images are now available for Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
The release of RHACS 3.67 provides the following new features, bug fixes, security patches and system changes:
- OpenShift Dedicated support: RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on Amazon Web Services and Google Cloud Platform.
- Use OpenShift OAuth server as an identity provider: If you are using RHACS with OpenShift, you can now configure the built-in OpenShift OAuth server as an identity provider for RHACS.
- Enhancements for CI outputs: Red Hat has improved the usability of RHACS CI integrations. CI outputs now show additional detailed information about the vulnerabilities and the security policies responsible for broken builds.
- Runtime Class policy criteria: Users can now use RHACS to define the container runtime configuration that may be used to run a pod's containers using the Runtime Class policy criteria.
Security Fix(es):
- civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API (CVE-2020-27304)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)
- golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet (CVE-2021-29923)
- helm: information disclosure vulnerability (CVE-2021-32690)
- golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)
- nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fixes The release of RHACS 3.67 includes the following bug fixes:
- Previously, when using RHACS with the Compliance Operator integration, RHACS did not respect or populate Compliance Operator TailoredProfiles. This has been fixed.
- Previously, the Alpine Linux package manager (APK) in Image policy looked for the presence of the apk package in the image rather than the apk-tools package. This issue has been fixed.
System changes The release of RHACS 3.67 includes the following system changes:
- Scanner now identifies vulnerabilities in Ubuntu 21.10 images.
- The Port exposure method policy criteria now include route as an exposure method.
- The OpenShift: Kubeadmin Secret Accessed security policy now allows the OpenShift Compliance Operator to check for the existence of the Kubeadmin secret without creating a violation.
- The OpenShift Compliance Operator integration now supports using TailoredProfiles.
- The RHACS Jenkins plugin now provides additional security information.
- When you enable the environment variable ROX_NETWORK_ACCESS_LOG for Central, the logs contain the Request URI and X-Forwarded-For header values.
- The default uid:gid pair for the Scanner image is now 65534:65534.
- RHACS adds a new default Scope Manager role that includes minimum permissions to create and modify access scopes.
- If microdnf is part of an image or shows up in process execution, RHACS reports it as a security violation for the Red Hat Package Manager in Image or the Red Hat Package Manager Execution security policies.
- In addition to manually uploading vulnerability definitions in offline mode, you can now upload definitions in online mode.
- You can now format the output of the following roxctl CLI commands in table, csv, or JSON format: image scan, image check & deployment check
- You can now use a regular expression for the deployment name while specifying policy exclusions.
Solution:
To take advantage of these new features, fixes and changes, please upgrade Red Hat Advanced Cluster Security for Kubernetes to version 3.67.
- Bugs fixed (https://bugzilla.redhat.com/):
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1978144 - CVE-2021-32690 helm: information disclosure vulnerability 1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet 1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function 2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability 2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) 2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API
- JIRA issues fixed (https://issues.jboss.org/):
RHACS-65 - Release RHACS 3.67.0
- References:
https://access.redhat.com/security/cve/CVE-2018-20673 https://access.redhat.com/security/cve/CVE-2019-5827 https://access.redhat.com/security/cve/CVE-2019-13750 https://access.redhat.com/security/cve/CVE-2019-13751 https://access.redhat.com/security/cve/CVE-2019-17594 https://access.redhat.com/security/cve/CVE-2019-17595 https://access.redhat.com/security/cve/CVE-2019-18218 https://access.redhat.com/security/cve/CVE-2019-19603 https://access.redhat.com/security/cve/CVE-2019-20838 https://access.redhat.com/security/cve/CVE-2020-12762 https://access.redhat.com/security/cve/CVE-2020-13435 https://access.redhat.com/security/cve/CVE-2020-14155 https://access.redhat.com/security/cve/CVE-2020-16135 https://access.redhat.com/security/cve/CVE-2020-24370 https://access.redhat.com/security/cve/CVE-2020-27304 https://access.redhat.com/security/cve/CVE-2021-3200 https://access.redhat.com/security/cve/CVE-2021-3445 https://access.redhat.com/security/cve/CVE-2021-3580 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-3800 https://access.redhat.com/security/cve/CVE-2021-3801 https://access.redhat.com/security/cve/CVE-2021-20231 https://access.redhat.com/security/cve/CVE-2021-20232 https://access.redhat.com/security/cve/CVE-2021-20266 https://access.redhat.com/security/cve/CVE-2021-22876 https://access.redhat.com/security/cve/CVE-2021-22898 https://access.redhat.com/security/cve/CVE-2021-22925 https://access.redhat.com/security/cve/CVE-2021-23343 https://access.redhat.com/security/cve/CVE-2021-23840 https://access.redhat.com/security/cve/CVE-2021-23841 https://access.redhat.com/security/cve/CVE-2021-27645 https://access.redhat.com/security/cve/CVE-2021-28153 https://access.redhat.com/security/cve/CVE-2021-29923 https://access.redhat.com/security/cve/CVE-2021-32690 https://access.redhat.com/security/cve/CVE-2021-33560 https://access.redhat.com/security/cve/CVE-2021-33574 https://access.redhat.com/security/cve/CVE-2021-35942 
https://access.redhat.com/security/cve/CVE-2021-36084 https://access.redhat.com/security/cve/CVE-2021-36085 https://access.redhat.com/security/cve/CVE-2021-36086 https://access.redhat.com/security/cve/CVE-2021-36087 https://access.redhat.com/security/cve/CVE-2021-39293 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr Kjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w tKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e lq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV x4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2 e8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK qnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz vguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt G4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT PTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/ pJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN T0pPNmsPGZY= =ux5P -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Summary:
The Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Solution:
For details on how to install and use MTC, refer to:
https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html
- Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution 2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport) 2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster 2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
- Summary:
An update is now available for OpenShift Logging 5.2. Description:
Openshift Logging Bug Fix Release (5.2.3)
Security Fix(es):
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this errata update:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
For Red Hat OpenShift Logging 5.2, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1857 - OpenShift Alerting Rules Style-Guide Compliance LOG-1904 - [release-5.2] Fix the Display of ClusterLogging type in OLM LOG-1916 - [release-5.2] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
Show details on source website{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202108-2221",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "curl",
"scope": "gte",
"trust": 1.0,
"vendor": "haxx",
"version": "7.7"
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.1"
},
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.2.1"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.59"
},
{
"model": "sinec infrastructure network services",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0.1.1"
},
{
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.57"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.5"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.0"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.4"
},
{
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.26"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.78.0"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.3"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "sinema remote connect server",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.1"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.2"
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.0.1"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.1.0"
},
{
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.0"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.35"
},
{
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.3.1"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:haxx:curl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.78.0",
"versionStartIncluding": "7.7",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-004:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.0.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.2.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.3.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:11.5:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.26",
"versionStartIncluding": "8.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "5.7.35",
"versionStartIncluding": "5.7.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_infrastructure_network_services:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.0.1.1",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinema_remote_connect_server:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.1",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:9.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.0.6",
"versionStartIncluding": "9.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.2.12",
"versionStartIncluding": "8.2.0",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165002"
}
],
"trust": 0.7
},
"cve": "CVE-2021-22925",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "VHN-381399",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 5.3,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 3.9,
"impactScore": 1.4,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-22925",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-202107-1582",
"trust": 0.6,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-381399",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "CNNVD",
"id": "CNNVD-202107-1582"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "curl supports the `-t` command line option, known as `CURLOPT_TELNETOPTIONS` in libcurl. This rarely used option is used to send variable=content pairs to TELNET servers. Due to a flaw in the option parser for sending `NEW_ENV` variables, libcurl could be made to pass on uninitialized data from a stack based buffer to the server, therefore potentially revealing sensitive internal information to the server using a clear-text network protocol. This could happen because curl did not call and use sscanf() correctly when parsing the string provided by the application. A flaw has been found in curl. The fix for CVE-2021-22898 doesn\u0027t remedy the vulnerability. The highest threat from this vulnerability is to confidentiality. Description:\n\nGatekeeper Operator v0.2\n\nGatekeeper is an open source project that applies the OPA Constraint\nFramework to enforce policies on your Kubernetes clusters. \n\nThis advisory contains the container images for Gatekeeper that include\nsecurity updates, and container upgrades. For support options for any other use, see the Gatekeeper\nopen source project website at:\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/. \n\nSecurity updates:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nThe requirements to apply the upgraded images are different whether or not\nyou\nused the operator. Complete the following steps, depending on your\ninstallation:\n\n- - Upgrade gatekeeper operator:\nThe gatekeeper operator that is installed by the gatekeeper operator policy\nhas\n`installPlanApproval` set to `Automatic`. This setting means the operator\nwill\nbe upgraded automatically when there is a new version of the operator. No\nfurther action is required for upgrade. 
If you changed the setting for\n`installPlanApproval` to `manual`, then you must view each cluster to\nmanually\napprove the upgrade to the operator. \n\n- - Upgrade gatekeeper without the operator:\nThe gatekeeper version is specified as part of the Gatekeeper CR in the\ngatekeeper operator policy. To upgrade the gatekeeper version:\na) Determine the latest version of gatekeeper by visiting:\nhttps://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. \nb) Click the tag dropdown, and find the latest static tag. An example tag\nis\n\u0027v3.3.0-1\u0027. \nc) Edit the gatekeeper operator policy and update the image tag to use the\nlatest static tag. For example, you might change this line to image:\n\u0027registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1\u0027. Bugs fixed (https://bugzilla.redhat.com/):\n\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: ACS 3.67 security and enhancement update\nAdvisory ID: RHSA-2021:4902-01\nProduct: RHACS\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4902\nIssue date: 2021-12-01\nCVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 \n CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 \n CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 \n CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 \n CVE-2020-16135 CVE-2020-24370 CVE-2020-27304 \n CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 \n CVE-2021-3749 CVE-2021-3800 CVE-2021-3801 \n CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 \n CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 \n CVE-2021-23343 CVE-2021-23840 CVE-2021-23841 \n CVE-2021-27645 CVE-2021-28153 CVE-2021-29923 \n CVE-2021-32690 CVE-2021-33560 CVE-2021-33574 \n CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 \n CVE-2021-36086 CVE-2021-36087 CVE-2021-39293 \n=====================================================================\n\n1. Summary:\n\nUpdated images are now available for Red Hat Advanced Cluster Security for\nKubernetes (RHACS). \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nThe release of RHACS 3.67 provides the following new features, bug fixes,\nsecurity patches and system changes:\n\nOpenShift Dedicated support\n\nRHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on\nAmazon Web Services and Google Cloud Platform. \n\n1. Use OpenShift OAuth server as an identity provider\nIf you are using RHACS with OpenShift, you can now configure the built-in\nOpenShift OAuth server as an identity provider for RHACS. \n\n2. 
Enhancements for CI outputs\nRed Hat has improved the usability of RHACS CI integrations. CI outputs now\nshow additional detailed information about the vulnerabilities and the\nsecurity policies responsible for broken builds. \n\n3. Runtime Class policy criteria\nUsers can now use RHACS to define the container runtime configuration that\nmay be used to run a pod\u2019s containers using the Runtime Class policy\ncriteria. \n\nSecurity Fix(es):\n\n* civetweb: directory traversal when using the built-in example HTTP\nform-based file upload mechanism via the mg_handle_form_request API\n(CVE-2020-27304)\n\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n\n* nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)\n\n* golang: net: incorrect parsing of extraneous zero characters at the\nbeginning of an IP address octet (CVE-2021-29923)\n\n* helm: information disclosure vulnerability (CVE-2021-32690)\n\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fixes\nThe release of RHACS 3.67 includes the following bug fixes:\n\n1. Previously, when using RHACS with the Compliance Operator integration,\nRHACS did not respect or populate Compliance Operator TailoredProfiles. \nThis has been fixed. \n\n2. Previously, the Alpine Linux package manager (APK) in Image policy\nlooked for the presence of apk package in the image rather than the\napk-tools package. This issue has been fixed. \n\nSystem changes\nThe release of RHACS 3.67 includes the following system changes:\n\n1. Scanner now identifies vulnerabilities in Ubuntu 21.10 images. \n2. 
The Port exposure method policy criteria now include route as an\nexposure method. \n3. The OpenShift: Kubeadmin Secret Accessed security policy now allows the\nOpenShift Compliance Operator to check for the existence of the Kubeadmin\nsecret without creating a violation. \n4. The OpenShift Compliance Operator integration now supports using\nTailoredProfiles. \n5. The RHACS Jenkins plugin now provides additional security information. \n6. When you enable the environment variable ROX_NETWORK_ACCESS_LOG for\nCentral, the logs contain the Request URI and X-Forwarded-For header\nvalues. \n7. The default uid:gid pair for the Scanner image is now 65534:65534. \n8. RHACS adds a new default Scope Manager role that includes minimum\npermissions to create and modify access scopes. \n9. If microdnf is part of an image or shows up in process execution, RHACS\nreports it as a security violation for the Red Hat Package Manager in Image\nor the Red Hat Package Manager Execution security policies. \n10. In addition to manually uploading vulnerability definitions in offline\nmode, you can now upload definitions in online mode. \n11. You can now format the output of the following roxctl CLI commands in\ntable, csv, or JSON format: image scan, image check \u0026 deployment check\n12. You can now use a regular expression for the deployment name while\nspecifying policy exclusions\n\n3. Solution:\n\nTo take advantage of these new features, fixes and changes, please upgrade\nRed Hat Advanced Cluster Security for Kubernetes to version 3.67. \n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nRHACS-65 - Release RHACS 3.67.0\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-20673\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2020-27304\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps:
//access.redhat.com/security/cve/CVE-2021-3801\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-20266\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-23343\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-27645\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-29923\nhttps://access.redhat.com/security/cve/CVE-2021-32690\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-33574\nhttps://access.redhat.com/security/cve/CVE-2021-35942\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-39293\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr\nKjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w\ntKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e\nlq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV\nx4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2\ne8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK\nqnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz\nvguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt\nG4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT\nPTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/\npJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN\nT0pPNmsPGZY=\n=ux5P\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5. Summary:\n\nAn update is now available for OpenShift Logging 5.2. Description:\n\nOpenshift Logging Bug Fix Release (5.2.3)\n\nSecurity Fix(es):\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this errata update:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nFor Red Hat OpenShift Logging 5.2, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1857 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1904 - [release-5.2] Fix the Display of ClusterLogging type in OLM\nLOG-1916 - [release-5.2] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\n\n6",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22925"
},
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "VULMON",
"id": "CVE-2021-22925"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165002"
}
],
"trust": 1.71
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-22925",
"trust": 2.5
},
{
"db": "SIEMENS",
"id": "SSA-389290",
"trust": 1.7
},
{
"db": "SIEMENS",
"id": "SSA-484086",
"trust": 1.7
},
{
"db": "HACKERONE",
"id": "1223882",
"trust": 1.7
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166489",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165002",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165129",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165096",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165135",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165209",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165862",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166051",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166308",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165633",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164886",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165758",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164249",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "163637",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166789",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3935",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4229",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4172",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1071",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0716",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2473",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3905",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0245",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4095",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4059",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4254",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4019",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3748",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0493",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1837",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2526",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0394",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3101.2",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1677",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3146",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021111131",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021072212",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021080210",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021072814",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031104",
"trust": 0.6
},
{
"db": "ICS CERT",
"id": "ICSA-22-167-17",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202107-1582",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166309",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-381399",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-22925",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165286",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165287",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165631",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "VULMON",
"id": "CVE-2021-22925"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "CNNVD",
"id": "CNNVD-202107-1582"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"id": "VAR-202108-2221",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T21:13:18.214000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Arch Linux Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=158024"
},
{
"title": "Red Hat: CVE-2021-22925",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-22925"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-22925 log"
},
{
"title": "Arch Linux Advisories: [ASA-202107-61] libcurl-compat: multiple issues",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-61"
},
{
"title": "Arch Linux Advisories: [ASA-202107-60] lib32-curl: multiple issues",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-60"
},
{
"title": "Arch Linux Advisories: [ASA-202107-64] lib32-libcurl-gnutls: multiple issues",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-64"
},
{
"title": "Arch Linux Advisories: [ASA-202107-62] lib32-libcurl-compat: multiple issues",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-62"
},
{
"title": "Arch Linux Advisories: [ASA-202107-63] libcurl-gnutls: multiple issues",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-63"
},
{
"title": "Arch Linux Advisories: [ASA-202107-59] curl: multiple issues",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-59"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-22925"
},
{
"db": "CNNVD",
"id": "CNNVD-202107-1582"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-908",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.7,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf"
},
{
"trust": 1.7,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-484086.pdf"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20210902-0003/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht212804"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht212805"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2021/sep/39"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2021/sep/40"
},
{
"trust": 1.7,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.7,
"url": "https://hackerone.com/reports/1223882"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.4,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/frucw2uvnyudzf72dqlfqr4pjec6cf7v/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/frucw2uvnyudzf72dqlfqr4pjec6cf7v/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0245"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-167-17"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164886/red-hat-security-advisory-2021-4511-03.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021111131"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/curl-information-disclosure-via-telnet-stack-contents-35956"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170303/gentoo-linux-security-advisory-202212-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3905"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1071"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4019"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3748"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3146"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165862/red-hat-security-advisory-2022-0434-05.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164249/apple-security-advisory-2021-09-20-8.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021072814"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165633/ubuntu-security-notice-usn-5021-2.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021080210"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0716"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165135/red-hat-security-advisory-2021-4914-06.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165129/red-hat-security-advisory-2021-4902-06.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165209/red-hat-security-advisory-2021-5038-04.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3101.2"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht212805"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166489/red-hat-security-advisory-2022-1081-01.html"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht212804"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165096/red-hat-security-advisory-2021-4845-05.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0394"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0493"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2526"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3935"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021072212"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6495407"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4229"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165002/red-hat-security-advisory-2021-4032-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165099/red-hat-security-advisory-2021-4848-07.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4059"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2473"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166051/red-hat-security-advisory-2022-0580-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163637/ubuntu-security-notice-usn-5021-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166789/red-hat-security-advisory-2022-1396-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4254"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165758/red-hat-security-advisory-2022-0318-06.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4095"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4172"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1837"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166308/red-hat-security-advisory-2022-0842-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031104"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1677"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.4,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37136"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-44228"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37137"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-21409"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
},
{
"trust": 0.1,
"url": "https://security.archlinux.org/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1081"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23308"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9."
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5128"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20317"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43267"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5127"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3575"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30758"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30689"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30682"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-18032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1801"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1788"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30744"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30797"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21779"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27828"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1871"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30734"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26926"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3272"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0202"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29923"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4902"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3801"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3757"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4848"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3620"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4032"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "VULMON",
"id": "CVE-2021-22925"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "CNNVD",
"id": "CNNVD-202107-1582"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "VULMON",
"id": "CVE-2021-22925"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "CNNVD",
"id": "CNNVD-202107-1582"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-08-05T00:00:00",
"db": "VULHUB",
"id": "VHN-381399"
},
{
"date": "2022-03-28T15:52:16",
"db": "PACKETSTORM",
"id": "166489"
},
{
"date": "2021-12-15T15:20:33",
"db": "PACKETSTORM",
"id": "165286"
},
{
"date": "2021-12-15T15:20:43",
"db": "PACKETSTORM",
"id": "165287"
},
{
"date": "2022-01-20T17:48:29",
"db": "PACKETSTORM",
"id": "165631"
},
{
"date": "2021-12-02T16:06:16",
"db": "PACKETSTORM",
"id": "165129"
},
{
"date": "2021-11-30T14:44:48",
"db": "PACKETSTORM",
"id": "165099"
},
{
"date": "2021-11-17T15:25:40",
"db": "PACKETSTORM",
"id": "165002"
},
{
"date": "2021-07-21T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202107-1582"
},
{
"date": "2021-08-05T21:15:11.467000",
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-01-05T00:00:00",
"db": "VULHUB",
"id": "VHN-381399"
},
{
"date": "2023-06-05T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202107-1582"
},
{
"date": "2024-03-27T15:11:42.063000",
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "CNNVD",
"id": "CNNVD-202107-1582"
}
],
"trust": 0.7
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Arch Linux Security hole",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202107-1582"
}
],
"trust": 0.6
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202107-1582"
}
],
"trust": 0.6
}
}
VAR-202206-1900
Vulnerability from variot - Updated: 2024-07-23 20:57
curl < 7.84.0 supports "chained" HTTP compression algorithms, meaning that a server response can be compressed multiple times and potentially with different algorithms. The number of acceptable "links" in this "decompression chain" was unbounded, allowing a malicious server to insert a virtually unlimited number of compression steps. The use of such a decompression chain could result in a "malloc bomb", making curl end up spending enormous amounts of allocated heap memory, or trying to and returning out of memory errors. Harry Sintonen discovered that curl incorrectly handled certain file permissions. An attacker could possibly use this issue to expose sensitive information. This issue only affected Ubuntu 21.10, and Ubuntu 22.04 LTS. (CVE-2022-32207). Description:
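The decompression-chain issue described above can be sketched outside of curl. The snippet below is a minimal Python illustration, not curl's actual implementation: it builds a response body that has been gzip-compressed several times over (mimicking chained `Content-Encoding` links) and shows the shape of the mitigation adopted in curl 7.84.0, a small fixed cap on how many links the decoder will accept.

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # RFC 1952 gzip magic bytes
MAX_LINKS = 5             # illustrative cap; the fixed curl applies a similarly small limit

def build_chain(payload: bytes, layers: int) -> bytes:
    """Mimic a malicious server: compress the body `layers` times over."""
    data = payload
    for _ in range(layers):
        data = gzip.compress(data)
    return data

def decompress_chain(data: bytes, max_links: int = MAX_LINKS) -> bytes:
    """Peel off gzip layers, refusing chains longer than `max_links`."""
    for _ in range(max_links):
        if not data.startswith(GZIP_MAGIC):
            return data  # fully decoded
        data = gzip.decompress(data)
    raise ValueError("decompression chain exceeds allowed links")
```

An unbounded version of this loop is the "malloc bomb": every link the server adds can expand enormously on decompression, so heap usage grows with the length of the chain the attacker chooses.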
Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud. Bugs fixed (https://bugzilla.redhat.com/):
2041540 - RHACM 2.4 using deprecated APIs in managed clusters 2074766 - vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes 2079418 - cluster update status is stuck, also update is not even visible 2088486 - Policy that creates cluster role is showing as not compliant due to Request entity too large message 2089490 - Upgraded from RHACM 2.2-->2.3-->2.4 and cannot create cluster 2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2097464 - ACM Console Becomes Unusable After a Time 2100613 - RHACM 2.4.6 images 2102436 - Cluster Pools with conflicting name of existing clusters in same namespace fails creation and deletes existing cluster 2102495 - ManagedClusters in Pending import state after ACM hub migration 2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS 2109354 - CVE-2022-31150 nodejs16: CRLF injection in node-undici 2121396 - CVE-2022-31151 nodejs/undici: Cookie headers uncleared on cross-origin redirect 2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
- Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security fix:
- CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
Bug fixes:
-
Remove 1.9.1 from Proxy Patch Documentation (BZ# 2076856)
-
RHACM 2.3.12 images (BZ# 2101411)
-
Bugs fixed (https://bugzilla.redhat.com/):
2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation 2101411 - RHACM 2.3.12 images 2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Security Fix(es):
-
curl: HTTP compression denial of service (CVE-2022-32206)
-
curl: HTTP multi-header compression denial of service (CVE-2023-23916)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2099300 - CVE-2022-32206 curl: HTTP compression denial of service 2167815 - CVE-2023-23916 curl: HTTP multi-header compression denial of service
-
Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
-
After the clusters are managed, you can use the APIs that are provided by the engine to distribute configuration based on placement policy.
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: Gatekeeper Operator v0.2 security and container updates Advisory ID: RHSA-2022:6348-01 Product: Red Hat ACM Advisory URL: https://access.redhat.com/errata/RHSA-2022:6348 Issue date: 2022-09-06 CVE Names: CVE-2021-40528 CVE-2022-1292 CVE-2022-1586 CVE-2022-1705 CVE-2022-1962 CVE-2022-2068 CVE-2022-2097 CVE-2022-2526 CVE-2022-28131 CVE-2022-29824 CVE-2022-30629 CVE-2022-30630 CVE-2022-30631 CVE-2022-30632 CVE-2022-30633 CVE-2022-30635 CVE-2022-32148 CVE-2022-32206 CVE-2022-32208 =====================================================================
- Summary:
Gatekeeper Operator v0.2 security updates
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Gatekeeper Operator v0.2
Gatekeeper is an open source project that applies the OPA Constraint Framework to enforce policies on your Kubernetes clusters.
This advisory contains the container images for Gatekeeper that include bug fixes and container upgrades.
Note: Gatekeeper support from the Red Hat support team is limited to where it is integrated and used with Red Hat Advanced Cluster Management for Kubernetes. For support options for any other use, see the Gatekeeper open source project website at: https://open-policy-agent.github.io/gatekeeper/website/docs/howto/.
Security fix:
-
CVE-2022-30629: gatekeeper-container: golang: crypto/tls: session tickets lack random ticket_age_add
-
CVE-2022-1705: golang: net/http: improper sanitization of Transfer-Encoding header
-
CVE-2022-1962: golang: go/parser: stack exhaustion in all Parse* functions
-
CVE-2022-28131: golang: encoding/xml: stack exhaustion in Decoder.Skip
-
CVE-2022-30630: golang: io/fs: stack exhaustion in Glob
-
CVE-2022-30631: golang: compress/gzip: stack exhaustion in Reader.Read
-
CVE-2022-30632: golang: path/filepath: stack exhaustion in Glob
-
CVE-2022-30635: golang: encoding/gob: stack exhaustion in Decoder.Decode
-
CVE-2022-30633: golang: encoding/xml: stack exhaustion in Unmarshal
-
CVE-2022-32148: golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
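The golang stack-exhaustion CVEs listed above (in go/parser, encoding/xml, encoding/gob, io/fs, compress/gzip, and path/filepath) all follow one pattern: a recursive or unbounded traversal driven by attacker-controlled input nesting. A minimal Python sketch of the mitigation class — iterative traversal with an explicit depth cap — is shown below; this illustrates the general defense, not the actual Go patches.

```python
def parse_nested(s: str, max_depth: int = 1000) -> int:
    """Return the maximum nesting depth of a string like '((()))'.

    Traverses iteratively and enforces an explicit depth cap, so
    attacker-controlled inputs fail fast with an error instead of
    exhausting the call stack (the failure mode behind the CVEs above).
    """
    depth = 0
    deepest = 0
    for ch in s:
        if ch == "(":
            depth += 1
            if depth > max_depth:
                raise ValueError("input nesting exceeds limit")
            deepest = max(deepest, depth)
        elif ch == ")":
            depth -= 1
    return deepest
```

A hostile input such as "(" * 1_000_000 is rejected at the cap rather than consuming unbounded stack or heap.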
-
Solution:
The requirements to apply the upgraded images are different whether or not you used the operator. Complete the following steps, depending on your installation:
-
Upgrade gatekeeper operator: The gatekeeper operator that is installed by the gatekeeper operator policy has
installPlanApproval set to Automatic. This setting means the operator will be upgraded automatically when there is a new version of the operator. No further action is required for upgrade. If you changed the setting for installPlanApproval to manual, then you must view each cluster to manually approve the upgrade to the operator. -
Upgrade gatekeeper without the operator: The gatekeeper version is specified as part of the Gatekeeper CR in the gatekeeper operator policy. To upgrade the gatekeeper version: a) Determine the latest version of gatekeeper by visiting: https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. b) Click the tag dropdown, and find the latest static tag. An example tag is 'v3.3.0-1'. c) Edit the gatekeeper operator policy and update the image tag to use the latest static tag. For example, you might change this line to image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'.
Refer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/ for additional information.
- Bugs fixed (https://bugzilla.redhat.com/):
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read 2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob 2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header 2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions 2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working 2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob 2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode 2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip 2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
- References:
https://access.redhat.com/security/cve/CVE-2021-40528 https://access.redhat.com/security/cve/CVE-2022-1292 https://access.redhat.com/security/cve/CVE-2022-1586 https://access.redhat.com/security/cve/CVE-2022-1705 https://access.redhat.com/security/cve/CVE-2022-1962 https://access.redhat.com/security/cve/CVE-2022-2068 https://access.redhat.com/security/cve/CVE-2022-2097 https://access.redhat.com/security/cve/CVE-2022-2526 https://access.redhat.com/security/cve/CVE-2022-28131 https://access.redhat.com/security/cve/CVE-2022-29824 https://access.redhat.com/security/cve/CVE-2022-30629 https://access.redhat.com/security/cve/CVE-2022-30630 https://access.redhat.com/security/cve/CVE-2022-30631 https://access.redhat.com/security/cve/CVE-2022-30632 https://access.redhat.com/security/cve/CVE-2022-30633 https://access.redhat.com/security/cve/CVE-2022-30635 https://access.redhat.com/security/cve/CVE-2022-32148 https://access.redhat.com/security/cve/CVE-2022-32206 https://access.redhat.com/security/cve/CVE-2022-32208 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYxd1LNzjgjWX9erEAQi7KxAAjtYnTUInhFC8FJ6zXunwhBa8YpT3E6Ym hemyRubgeyUdhySlgfPFmhrEU6nT3CUmzVN11wQu9iVmUzg3V/x+WhvMK371313m 7XzE0nuZ5uZRxXGVr8dqoecgm47t2884+QzGO4cMIsK5ojfHLBY6oeYunjW6lC5/ 7P40TjANWdZMirOmxoOk3OHeYpFC9oIiovidDn7zqf3PFOa50ux6w4P/3Dep5qVl W1BaNJkWxRL5Uj2AiyxtnLR2Tg713ocazkZZ83nJdr2eMoFFJL7l7u/W2m9LS5rN UhwHejs+4kizsumeCRFyq5I67vmkGE2EMun3yKZDGNB8xgxQqkaOBTkcF4qzzgOt +cLhTRiuGXS4NETqYaWGE0n0kmFCE5jFbZaOlp9L1C56LtB4Ob6BSK/qtdl8wmMB Ap8POcwOp/6TM2SfXg27TzYyYdA3T8EDG4NcZJ05Kt/QsEm7odWa8qMQrBLx+vBs AzDqEoMuL6yPuU4TfpmUI19M3kCGq3dK6jvMv7PA3xn2XQnBfxgIZv5ayibOoM+G 4zhJAs44wO9xEb95fVUego6k3PME3r4u2az8CGBNBcNb9S56yktm3cfxfJv9fc6T C0pfoeTNknLDqKXTCCd8q3qurIX1oX4YTYDjn7F9lrsSQb/b7cv09VliE8xJyg/m yZ5qSsVjpIw= =RV0+ -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . These flaws may allow remote attackers to obtain sensitive information, leak authentication or cookie header data or facilitate a denial of service attack.
For the stable distribution (bullseye), these problems have been fixed in version 7.74.0-1.3+deb11u2.
We recommend that you upgrade your curl packages.
For the detailed security status of curl please refer to its security tracker page at: https://security-tracker.debian.org/tracker/curl
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQKTBAEBCgB9FiEErPPQiO8y7e9qGoNf2a0UuVE7UeQFAmLoBaNfFIAAAAAALgAo aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldEFD RjNEMDg4RUYzMkVERUY2QTFBODM1RkQ5QUQxNEI5NTEzQjUxRTQACgkQ2a0UuVE7 UeTf9A//VWkco2gxCMMe8JDcL9sLD0B5L8KGRxbPBYmpE1l2kCpiW9QGVwCN3q2K i8xo0jmRxSwSXDmAE17aTtGT66vU8vQSHewty031TcvWKBoAJpKRTbazfdOy/vDD waofTEaUClFt3NNiR3gigRU6OFV/9MWlUWwCJ/Wgd5osJTQCyWV/iHz3FJluc1Gp rXamYLnWGUJbIZgMFEo7TqIyb91P0PrX4hpnCcnhvY4ci5NWOj2qaoWGhgF+f9gz Uao91GTOnuTyoY3apKzifdO5dih9zJttnRKUgHkn9YCGxanljoPjHRYOavWdN6bE yIpT/Xw2dy05Fzydb73bDurQP+mkyWGZA+S8gxtbY7S7OylRS9iHSfyUpAVEM/Ab SPkGQl6vBKr7dmyHkdIlbViste6kcmhQQete9E3tM18MkyK0NbBiUj+pShNPC+SF REStal14ZE+DSwFKp5UA8izEh0G5RC5VUVhB/jtoxym2rvmIamk5YqCS1rupGP9R 1Y+Jm8CywBrKHl5EzAVUswC5xDAArWdXRvrgHCeElnkwuCwRC8AgRiYFFRulWKwt TV5qveehnzSc2z5IDc/tdiPWNJhJu/blNN8BauG8zmJV4ZhZP9EO1FCLE7DpqQ38 EPtUTMXaMQR1W15He51auBQwJgSiX1II+5jh6PeZTKBKnJgLYNA= =3E71 -----END PGP SIGNATURE----- . Summary:
Red Hat OpenShift Virtualization release 4.11.1 is now available with updates to packages and images that fix several bugs and add enhancements. Description:
OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform.
Bug Fix(es):
-
Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api (BZ#2033191)
-
Restart of VM Pod causes SSH keys to be regenerated within VM (BZ#2087177)
-
Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR (BZ#2089391)
-
[4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass (BZ#2098225)
-
Fedora version in DataImportCrons is not 'latest' (BZ#2102694)
-
[4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted (BZ#2109407)
-
CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls (BZ#2110562)
-
Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based (BZ#2112643)
-
Unable to start windows VMs on PSI setups (BZ#2115371)
-
[4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 (BZ#2128997)
-
Mark Windows 11 as TechPreview (BZ#2129013)
-
4.11.1 rpms (BZ#2139453)
This advisory contains the following OpenShift Virtualization 4.11.1 images.
RHEL-8-CNV-4.11
virt-cdi-operator-container-v4.11.1-5 virt-cdi-uploadserver-container-v4.11.1-5 virt-cdi-apiserver-container-v4.11.1-5 virt-cdi-importer-container-v4.11.1-5 virt-cdi-controller-container-v4.11.1-5 virt-cdi-cloner-container-v4.11.1-5 virt-cdi-uploadproxy-container-v4.11.1-5 checkup-framework-container-v4.11.1-3 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7 kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7 kubevirt-template-validator-container-v4.11.1-4 virt-handler-container-v4.11.1-5 hostpath-provisioner-operator-container-v4.11.1-4 virt-api-container-v4.11.1-5 vm-network-latency-checkup-container-v4.11.1-3 cluster-network-addons-operator-container-v4.11.1-5 virtio-win-container-v4.11.1-4 virt-launcher-container-v4.11.1-5 ovs-cni-marker-container-v4.11.1-5 hyperconverged-cluster-webhook-container-v4.11.1-7 virt-controller-container-v4.11.1-5 virt-artifacts-server-container-v4.11.1-5 kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7 kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7 libguestfs-tools-container-v4.11.1-5 hostpath-provisioner-container-v4.11.1-4 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7 kubevirt-tekton-tasks-copy-template-container-v4.11.1-7 cnv-containernetworking-plugins-container-v4.11.1-5 bridge-marker-container-v4.11.1-5 virt-operator-container-v4.11.1-5 hostpath-csi-driver-container-v4.11.1-4 kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7 kubemacpool-container-v4.11.1-5 hyperconverged-cluster-operator-container-v4.11.1-7 kubevirt-ssp-operator-container-v4.11.1-4 ovs-cni-plugin-container-v4.11.1-5 kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7 kubevirt-tekton-tasks-operator-container-v4.11.1-2 cnv-must-gather-container-v4.11.1-8 kubevirt-console-plugin-container-v4.11.1-9 hco-bundle-registry-container-v4.11.1-49
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
2033191 - Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api 2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression 2070772 - When specifying pciAddress for several SR-IOV NIC they are not correctly propagated to libvirt XML 2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode 2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar 2087177 - Restart of VM Pod causes SSH keys to be regenerated within VM 2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR 2091856 - ?Edit BootSource? action should have more explicit information when disabled 2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2098225 - [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass 2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS 2102694 - Fedora version in DataImportCrons is not 'latest' 2109407 - [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted 2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls 2112643 - Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based 2115371 - Unable to start windows VMs on PSI setups 2119613 - GiB changes to B in Template's Edit boot source reference modal 2128554 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass 2128872 - [4.11]Can't restore cloned VM 2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 2129013 - Mark Windows 11 as TechPreview 2129235 - [RFE] Add "Copy SSH command" to VM action list 2134668 - Cannot edit ssh even vm is stopped 2139453 - 4.11.1 rpms
- This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
This release of Red Hat JBoss Core Services Apache HTTP Server 2.4.51 Service Pack 1 serves as a replacement for Red Hat JBoss Core Services Apache HTTP Server 2.4.51, and includes bug fixes and enhancements, which are documented in the Release Notes document linked to in the References.
Security Fix(es):
- libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)
- libxml2: dict corruption caused by entity reference cycles (CVE-2022-40304)
- expat: a use-after-free in the doContent function in xmlparse.c (CVE-2022-40674)
- zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field (CVE-2022-37434)
- curl: HSTS bypass via IDN (CVE-2022-42916)
- curl: HTTP proxy double-free (CVE-2022-42915)
- curl: POST following PUT confusion (CVE-2022-32221)
- httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism (CVE-2022-31813)
- httpd: mod_sed: DoS vulnerability (CVE-2022-30522)
- httpd: out-of-bounds read in ap_strcmp_match() (CVE-2022-28615)
- httpd: out-of-bounds read via ap_rwrite() (CVE-2022-28614)
- httpd: mod_proxy_ajp: Possible request smuggling (CVE-2022-26377)
- curl: control code in cookie denial of service (CVE-2022-35252)
- zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field (CVE-2022-37434)
- jbcs-httpd24-httpd: httpd: mod_isapi: out-of-bounds read (CVE-2022-28330)
- curl: Unpreserved file permissions (CVE-2022-32207)
- curl: various flaws (CVE-2022-32206 CVE-2022-32208)
- openssl: the c_rehash script allows command injection (CVE-2022-2068)
- openssl: c_rehash script allows command injection (CVE-2022-1292)
- jbcs-httpd24-httpd: httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody (CVE-2022-22721)
- jbcs-httpd24-httpd: httpd: mod_sed: Read/write beyond bounds (CVE-2022-23943)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):
2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds 2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody 2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection 2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling 2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read 2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite() 2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match() 2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability 2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism 2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection 2099300 - CVE-2022-32206 curl: HTTP compression denial of service 2099305 - CVE-2022-32207 curl: Unpreserved file permissions 2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification 2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field 2120718 - CVE-2022-35252 curl: control code in cookie denial of service 2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c 2135411 - CVE-2022-32221 curl: POST following PUT confusion 2135413 - CVE-2022-42915 curl: HTTP proxy double-free 2135416 - CVE-2022-42916 curl: HSTS bypass via IDN 2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE 2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles
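The curl flaw tracked above as CVE-2022-32206 stems from an unbounded "decompression chain": a server could stack an arbitrary number of Content-Encoding links, turning a small response into a malloc bomb. A minimal Python sketch of the fix's idea — refusing chains longer than a small cap before decoding — follows; the cap value and helper name are illustrative assumptions, not curl's actual implementation.

```python
import gzip
import zlib

MAX_LINKS = 5  # illustrative cap; curl 7.84.0 bounds the chain similarly


def decode_chain(body: bytes, encodings: list[str], max_links: int = MAX_LINKS) -> bytes:
    """Decode a Content-Encoding chain, rejecting overly long chains.

    Encodings are listed in the order the server applied them, so they
    are undone right-to-left. Checking the chain length *before* any
    decompression prevents the unbounded-allocation failure mode.
    """
    if len(encodings) > max_links:
        raise ValueError("decompression chain too long")
    data = body
    for enc in reversed(encodings):
        if enc == "gzip":
            data = gzip.decompress(data)
        elif enc == "deflate":
            data = zlib.decompress(data)
        elif enc == "identity":
            continue
        else:
            raise ValueError(f"unsupported encoding: {enc}")
    return data
```

For example, a doubly gzipped body with `encodings=["gzip", "gzip"]` decodes normally, while a ten-link chain is rejected up front regardless of payload size.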
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202206-1900",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "scalance sc632-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "scalance sc636-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "scalance sc622-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.84.0"
},
{
"model": "scalance sc646-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "scalance sc642-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "scalance sc626-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:haxx:curl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.84.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:element_software:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:bootstrap_os:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc622-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc622-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc626-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc626-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc632-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc632-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc636-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc636-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc642-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc642-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc646-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc646-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:9.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.0.6",
"versionStartIncluding": "9.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.2.12",
"versionStartIncluding": "8.2.0",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "168282"
},
{
"db": "PACKETSTORM",
"id": "170165"
}
],
"trust": 0.9
},
"cve": "CVE-2022-32206",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": true,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 6.5,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 2.8,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-32206",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-202206-2565",
"trust": 0.6,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "curl \u003c 7.84.0 supports \"chained\" HTTP compression algorithms, meaning that a serverresponse can be compressed multiple times and potentially with different algorithms. The number of acceptable \"links\" in this \"decompression chain\" was unbounded, allowing a malicious server to insert a virtually unlimited number of compression steps.The use of such a decompression chain could result in a \"malloc bomb\", makingcurl end up spending enormous amounts of allocated heap memory, or trying toand returning out of memory errors. Harry Sintonen incorrectly handled certain file permissions. \nAn attacker could possibly use this issue to expose sensitive information. \nThis issue only affected Ubuntu 21.10, and Ubuntu 22.04 LTS. (CVE-2022-32207). Description:\n\nSubmariner enables direct networking between pods and services on different\nKubernetes clusters that are either on-premises or in the cloud. Bugs fixed (https://bugzilla.redhat.com/):\n\n2041540 - RHACM 2.4 using deprecated APIs in managed clusters\n2074766 - vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect YAML changes\n2079418 - cluster update status is stuck, also update is not even visible\n2088486 - Policy that creates cluster role is showing as not compliant due to Request entity too large message\n2089490 - Upgraded from RHACM 2.2--\u003e2.3--\u003e2.4 and cannot create cluster\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2097464 - ACM Console Becomes Unusable After a Time\n2100613 - RHACM 2.4.6 images\n2102436 - Cluster Pools with conflicting name of existing clusters in same namespace fails creation and deletes existing cluster\n2102495 - ManagedClusters in Pending import state after ACM hub migration\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2109354 - CVE-2022-31150 nodejs16: CRLF injection in node-undici\n2121396 - CVE-2022-31151 nodejs/undici: Cookie headers uncleared on 
cross-origin redirect\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity fix:\n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\nBug fixes:\n\n* Remove 1.9.1 from Proxy Patch Documentation (BZ# 2076856)\n\n* RHACM 2.3.12 images (BZ# 2101411)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation\n2101411 - RHACM 2.3.12 images\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. \n\nSecurity Fix(es):\n\n* curl: HTTP compression denial of service (CVE-2022-32206)\n\n* curl: HTTP multi-header compression denial of service (CVE-2023-23916)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2167815 - CVE-2023-23916 curl: HTTP multi-header compression denial of service\n\n6. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
After the clusters are managed, you can use the APIs that\nare provided by the engine to distribute configuration based on placement\npolicy. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: Gatekeeper Operator v0.2 security and container updates\nAdvisory ID: RHSA-2022:6348-01\nProduct: Red Hat ACM\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6348\nIssue date: 2022-09-06\nCVE Names: CVE-2021-40528 CVE-2022-1292 CVE-2022-1586 \n CVE-2022-1705 CVE-2022-1962 CVE-2022-2068 \n CVE-2022-2097 CVE-2022-2526 CVE-2022-28131 \n CVE-2022-29824 CVE-2022-30629 CVE-2022-30630 \n CVE-2022-30631 CVE-2022-30632 CVE-2022-30633 \n CVE-2022-30635 CVE-2022-32148 CVE-2022-32206 \n CVE-2022-32208 \n=====================================================================\n\n1. Summary:\n\nGatekeeper Operator v0.2 security updates\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nGatekeeper Operator v0.2\n\nGatekeeper is an open source project that applies the OPA Constraint\nFramework to enforce policies on your Kubernetes clusters. \n\nThis advisory contains the container images for Gatekeeper that include bug\nfixes and container upgrades. \n\nNote: Gatekeeper support from the Red Hat support team is limited to where\nit is integrated and used with Red Hat Advanced Cluster Management\nfor Kubernetes. For support options for any other use, see the Gatekeeper\nopen source project website at:\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/. 
\n\nSecurity fix:\n\n* CVE-2022-30629: gatekeeper-container: golang: crypto/tls: session tickets\nlack random ticket_age_add\n\n* CVE-2022-1705: golang: net/http: improper sanitization of\nTransfer-Encoding header\n\n* CVE-2022-1962: golang: go/parser: stack exhaustion in all Parse*\nfunctions\n\n* CVE-2022-28131: golang: encoding/xml: stack exhaustion in Decoder.Skip\n\n* CVE-2022-30630: golang: io/fs: stack exhaustion in Glob\n\n* CVE-2022-30631: golang: compress/gzip: stack exhaustion in Reader.Read\n\n* CVE-2022-30632: golang: path/filepath: stack exhaustion in Glob\n\n* CVE-2022-30635: golang: encoding/gob: stack exhaustion in Decoder.Decode\n\n* CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n* CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy -\nomit X-Forwarded-For not working\n\n3. Solution:\n\nThe requirements to apply the upgraded images are different whether or not\nyou\nused the operator. Complete the following steps, depending on your\ninstallation:\n\n* Upgrade gatekeeper operator:\nThe gatekeeper operator that is installed by the gatekeeper operator policy\nhas\n`installPlanApproval` set to `Automatic`. This setting means the operator\nwill\nbe upgraded automatically when there is a new version of the operator. No\nfurther action is required for upgrade. If you changed the setting for\n`installPlanApproval` to `manual`, then you must view each cluster to\nmanually\napprove the upgrade to the operator. \n\n* Upgrade gatekeeper without the operator:\nThe gatekeeper version is specified as part of the Gatekeeper CR in the\ngatekeeper operator policy. To upgrade the gatekeeper version:\na) Determine the latest version of gatekeeper by visiting:\nhttps://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. \nb) Click the tag dropdown, and find the latest static tag. An example tag\nis\n\u0027v3.3.0-1\u0027. 
\nc) Edit the gatekeeper operator policy and update the image tag to use the\nlatest static tag. For example, you might change this line to image:\n\u0027registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1\u0027. \n\nRefer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/\nfor additional information. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1705\nhttps://access.redhat.com/security/cve/CVE-2022-1962\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-2526\nhttps://access.redhat.com/security/cve/CVE-2022-28131\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30629\nhttps://access.redhat.com/security/cve/CVE-2022-30630\nhttps://access.redhat.com/security/cve/CVE-2022-30631\nhttps://access.redhat.com/security/cve/CVE-2022-30632\nhttps://access.redhat.com/security/cve/CVE-2022-30633\nhttps://access.redhat.com/security/cve/CVE-2022-30635\nhttps://access.redhat.com/security/cve/CVE-2022-32148\nhttps://access.redhat.com/security/cve/CVE-2022-32206\nhttps://access.redhat.com/security/cve/CVE-2022-32208\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxd1LNzjgjWX9erEAQi7KxAAjtYnTUInhFC8FJ6zXunwhBa8YpT3E6Ym\nhemyRubgeyUdhySlgfPFmhrEU6nT3CUmzVN11wQu9iVmUzg3V/x+WhvMK371313m\n7XzE0nuZ5uZRxXGVr8dqoecgm47t2884+QzGO4cMIsK5ojfHLBY6oeYunjW6lC5/\n7P40TjANWdZMirOmxoOk3OHeYpFC9oIiovidDn7zqf3PFOa50ux6w4P/3Dep5qVl\nW1BaNJkWxRL5Uj2AiyxtnLR2Tg713ocazkZZ83nJdr2eMoFFJL7l7u/W2m9LS5rN\nUhwHejs+4kizsumeCRFyq5I67vmkGE2EMun3yKZDGNB8xgxQqkaOBTkcF4qzzgOt\n+cLhTRiuGXS4NETqYaWGE0n0kmFCE5jFbZaOlp9L1C56LtB4Ob6BSK/qtdl8wmMB\nAp8POcwOp/6TM2SfXg27TzYyYdA3T8EDG4NcZJ05Kt/QsEm7odWa8qMQrBLx+vBs\nAzDqEoMuL6yPuU4TfpmUI19M3kCGq3dK6jvMv7PA3xn2XQnBfxgIZv5ayibOoM+G\n4zhJAs44wO9xEb95fVUego6k3PME3r4u2az8CGBNBcNb9S56yktm3cfxfJv9fc6T\nC0pfoeTNknLDqKXTCCd8q3qurIX1oX4YTYDjn7F9lrsSQb/b7cv09VliE8xJyg/m\nyZ5qSsVjpIw=\n=RV0+\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. These flaws may allow remote attackers to obtain sensitive\ninformation, leak authentication or cookie header data or facilitate a\ndenial of service attack. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 7.74.0-1.3+deb11u2. \n\nWe recommend that you upgrade your curl packages. 
\n\nFor the detailed security status of curl please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/curl\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEErPPQiO8y7e9qGoNf2a0UuVE7UeQFAmLoBaNfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldEFD\nRjNEMDg4RUYzMkVERUY2QTFBODM1RkQ5QUQxNEI5NTEzQjUxRTQACgkQ2a0UuVE7\nUeTf9A//VWkco2gxCMMe8JDcL9sLD0B5L8KGRxbPBYmpE1l2kCpiW9QGVwCN3q2K\ni8xo0jmRxSwSXDmAE17aTtGT66vU8vQSHewty031TcvWKBoAJpKRTbazfdOy/vDD\nwaofTEaUClFt3NNiR3gigRU6OFV/9MWlUWwCJ/Wgd5osJTQCyWV/iHz3FJluc1Gp\nrXamYLnWGUJbIZgMFEo7TqIyb91P0PrX4hpnCcnhvY4ci5NWOj2qaoWGhgF+f9gz\nUao91GTOnuTyoY3apKzifdO5dih9zJttnRKUgHkn9YCGxanljoPjHRYOavWdN6bE\nyIpT/Xw2dy05Fzydb73bDurQP+mkyWGZA+S8gxtbY7S7OylRS9iHSfyUpAVEM/Ab\nSPkGQl6vBKr7dmyHkdIlbViste6kcmhQQete9E3tM18MkyK0NbBiUj+pShNPC+SF\nREStal14ZE+DSwFKp5UA8izEh0G5RC5VUVhB/jtoxym2rvmIamk5YqCS1rupGP9R\n1Y+Jm8CywBrKHl5EzAVUswC5xDAArWdXRvrgHCeElnkwuCwRC8AgRiYFFRulWKwt\nTV5qveehnzSc2z5IDc/tdiPWNJhJu/blNN8BauG8zmJV4ZhZP9EO1FCLE7DpqQ38\nEPtUTMXaMQR1W15He51auBQwJgSiX1II+5jh6PeZTKBKnJgLYNA=\n=3E71\n-----END PGP SIGNATURE-----\n. Summary:\n\nRed Hat OpenShift Virtualization release 4.11.1 is now available with\nupdates to packages and images that fix several bugs and add enhancements. Description:\n\nOpenShift Virtualization is Red Hat\u0027s virtualization solution designed for\nRed Hat OpenShift Container Platform. 
\n\nBug Fix(es):\n\n* Cloning a Block DV to VM with Filesystem with not big enough size comes\nto endless loop - using pvc api (BZ#2033191)\n\n* Restart of VM Pod causes SSH keys to be regenerated within VM\n(BZ#2087177)\n\n* Import gzipped raw file causes image to be downloaded and uncompressed to\nTMPDIR (BZ#2089391)\n\n* [4.11] VM Snapshot Restore hangs indefinitely when backed by a\nsnapshotclass (BZ#2098225)\n\n* Fedora version in DataImportCrons is not \u0027latest\u0027 (BZ#2102694)\n\n* [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is\ndeleted (BZ#2109407)\n\n* CNV introduces a compliance check fail in \"ocp4-moderate\" profile -\nroutes-protected-by-tls (BZ#2110562)\n\n* Nightly build: v4.11.0-578: index format was changed in 4.11 to\nfile-based instead of sqlite-based (BZ#2112643)\n\n* Unable to start windows VMs on PSI setups (BZ#2115371)\n\n* [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity\nrestricted:v1.24 (BZ#2128997)\n\n* Mark Windows 11 as TechPreview (BZ#2129013)\n\n* 4.11.1 rpms (BZ#2139453)\n\nThis advisory contains the following OpenShift Virtualization 4.11.1\nimages. 
\n\nRHEL-8-CNV-4.11\n\nvirt-cdi-operator-container-v4.11.1-5\nvirt-cdi-uploadserver-container-v4.11.1-5\nvirt-cdi-apiserver-container-v4.11.1-5\nvirt-cdi-importer-container-v4.11.1-5\nvirt-cdi-controller-container-v4.11.1-5\nvirt-cdi-cloner-container-v4.11.1-5\nvirt-cdi-uploadproxy-container-v4.11.1-5\ncheckup-framework-container-v4.11.1-3\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7\nkubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7\nkubevirt-template-validator-container-v4.11.1-4\nvirt-handler-container-v4.11.1-5\nhostpath-provisioner-operator-container-v4.11.1-4\nvirt-api-container-v4.11.1-5\nvm-network-latency-checkup-container-v4.11.1-3\ncluster-network-addons-operator-container-v4.11.1-5\nvirtio-win-container-v4.11.1-4\nvirt-launcher-container-v4.11.1-5\novs-cni-marker-container-v4.11.1-5\nhyperconverged-cluster-webhook-container-v4.11.1-7\nvirt-controller-container-v4.11.1-5\nvirt-artifacts-server-container-v4.11.1-5\nkubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7\nlibguestfs-tools-container-v4.11.1-5\nhostpath-provisioner-container-v4.11.1-4\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7\nkubevirt-tekton-tasks-copy-template-container-v4.11.1-7\ncnv-containernetworking-plugins-container-v4.11.1-5\nbridge-marker-container-v4.11.1-5\nvirt-operator-container-v4.11.1-5\nhostpath-csi-driver-container-v4.11.1-4\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7\nkubemacpool-container-v4.11.1-5\nhyperconverged-cluster-operator-container-v4.11.1-7\nkubevirt-ssp-operator-container-v4.11.1-4\novs-cni-plugin-container-v4.11.1-5\nkubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7\nkubevirt-tekton-tasks-operator-container-v4.11.1-2\ncnv-must-gather-container-v4.11.1-8\nkubevirt-console-plugin-container-v4.11.1-9\nhco-bundle-registry-container-v4.11.1-49\n\n3. 
Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n2033191 - Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2070772 - When specifying pciAddress for several SR-IOV NIC they are not correctly propagated to libvirt XML\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2087177 - Restart of VM Pod causes SSH keys to be regenerated within VM\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2098225 - [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2102694 - Fedora version in DataImportCrons is not \u0027latest\u0027\n2109407 - [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2112643 - Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based\n2115371 - Unable to start windows VMs on PSI setups\n2119613 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2128554 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as 
TechPreview\n2129235 - [RFE] Add \"Copy SSH command\" to VM action list\n2134668 - Cannot edit ssh even vm is stopped\n2139453 - 4.11.1 rpms\n\n5. This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. \n\nThis release of Red Hat JBoss Core Services Apache HTTP Server 2.4.51\nService Pack 1 serves as a replacement for Red Hat JBoss Core Services\nApache HTTP Server 2.4.51, and includes bug fixes and enhancements, which\nare documented in the Release Notes document linked to in the References. \n\nSecurity Fix(es):\n\n* libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)\n* libxml2: dict corruption caused by entity reference cycles\n(CVE-2022-40304)\n* expat: a use-after-free in the doContent function in xmlparse.c\n(CVE-2022-40674)\n* zlib: a heap-based buffer over-read or buffer overflow in inflate in\ninflate.c via a large gzip header extra field (CVE-2022-37434)\n* curl: HSTS bypass via IDN (CVE-2022-42916)\n* curl: HTTP proxy double-free (CVE-2022-42915)\n* curl: POST following PUT confusion (CVE-2022-32221)\n* httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n(CVE-2022-31813)\n* httpd: mod_sed: DoS vulnerability (CVE-2022-30522)\n* httpd: out-of-bounds read in ap_strcmp_match() (CVE-2022-28615)\n* httpd: out-of-bounds read via ap_rwrite() (CVE-2022-28614)\n* httpd: mod_proxy_ajp: Possible request smuggling (CVE-2022-26377)\n* curl: control code in cookie denial of service (CVE-2022-35252)\n* zlib: a heap-based buffer over-read or buffer overflow in inflate in\ninflate.c via a large gzip header extra field (CVE-2022-37434)\n* jbcs-httpd24-httpd: httpd: mod_isapi: out-of-bounds read (CVE-2022-28330)\n* curl: Unpreserved file permissions (CVE-2022-32207)\n* curl: various flaws (CVE-2022-32206 CVE-2022-32208)\n* openssl: the c_rehash script allows command 
injection (CVE-2022-2068)\n* openssl: c_rehash script allows command injection (CVE-2022-1292)\n* jbcs-httpd24-httpd: httpd: core: Possible buffer overflow with very large\nor unlimited LimitXMLRequestBody (CVE-2022-22721)\n* jbcs-httpd24-httpd: httpd: mod_sed: Read/write beyond bounds\n(CVE-2022-23943)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds\n2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody\n2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection\n2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling\n2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read\n2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()\n2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()\n2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability\n2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2099305 - CVE-2022-32207 curl: Unpreserved file permissions\n2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification\n2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field\n2120718 - CVE-2022-35252 curl: control code in cookie denial of service\n2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c\n2135411 - CVE-2022-32221 curl: POST following PUT confusion\n2135413 - CVE-2022-42915 curl: HTTP proxy double-free\n2135416 - CVE-2022-42916 curl: HSTS bypass via IDN\n2136266 - CVE-2022-40303 
libxml2: integer overflows with XML_PARSE_HUGE\n2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32206"
},
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "169318"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "168282"
},
{
"db": "PACKETSTORM",
"id": "170165"
}
],
"trust": 1.89
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-32206",
"trust": 2.7
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2023/02/15/3",
"trust": 1.6
},
{
"db": "HACKERONE",
"id": "1570651",
"trust": 1.6
},
{
"db": "SIEMENS",
"id": "SSA-333517",
"trust": 1.6
},
{
"db": "PACKETSTORM",
"id": "168347",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.3366",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6333",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6290",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4468",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4757",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3143",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3238",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4324",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5247",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4266",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4112",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3117",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5632",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.2163",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5300",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4525",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4568",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168284",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "170166",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "167607",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168301",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168174",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168503",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168378",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "169443",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022071152",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022062927",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565",
"trust": 0.6
},
{
"db": "VULMON",
"id": "CVE-2022-32206",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168265",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168538",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168213",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "172765",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168280",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169318",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170083",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168282",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170165",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "169318"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "168282"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"id": "VAR-202206-1900",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.53838384
},
"last_update_date": "2024-07-23T20:57:34.431000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "curl Remediation of resource management error vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=198520"
},
{
"title": "Ubuntu Security Notice: USN-5495-1: curl vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5495-1"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-770",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.6,
"url": "https://hackerone.com/reports/1570651"
},
{
"trust": 1.6,
"url": "http://seclists.org/fulldisclosure/2022/oct/41"
},
{
"trust": 1.6,
"url": "http://www.openwall.com/lists/oss-security/2023/02/15/3"
},
{
"trust": 1.6,
"url": "https://www.debian.org/security/2022/dsa-5197"
},
{
"trust": 1.6,
"url": "https://security.netapp.com/advisory/ntap-20220915-0003/"
},
{
"trust": 1.6,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-333517.pdf"
},
{
"trust": 1.6,
"url": "https://lists.debian.org/debian-lts-announce/2022/08/msg00017.html"
},
{
"trust": 1.6,
"url": "http://seclists.org/fulldisclosure/2022/oct/28"
},
{
"trust": 1.6,
"url": "https://support.apple.com/kb/ht213488"
},
{
"trust": 1.6,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/bev6br4mti3cewk2yu2hqzuw5fas3fey/"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.9,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.9,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-2526"
},
{
"trust": 0.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/bev6br4mti3cewk2yu2hqzuw5fas3fey/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/curl-denial-of-service-via-http-compression-38671"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022062927"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213488"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168347/red-hat-security-advisory-2022-6422-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6290"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168301/red-hat-security-advisory-2022-6287-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168174/red-hat-security-advisory-2022-6157-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4112"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5300"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170166/red-hat-security-advisory-2022-8840-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168378/red-hat-security-advisory-2022-6507-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5247"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6333"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3366"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168503/red-hat-security-advisory-2022-6560-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4757"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167607/ubuntu-security-notice-usn-5495-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.2163"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022071152"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3238"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168284/red-hat-security-advisory-2022-6183-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4266"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-32206/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5632"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4468"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4324"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4525"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169443/red-hat-security-advisory-2022-7058-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3117"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1729"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21123"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#critical"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21166"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-36067"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21125"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2015-20107"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0391"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30632"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30630"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5495-1"
},
{
"trust": 0.1,
"url": "https://submariner.io/getting-started/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6346"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314"
},
{
"trust": 0.1,
"url": "https://submariner.io/."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/add-ons/submariner#submariner-deploy-console"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31150"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27666"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31151"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1966"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1966"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23916"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:3460"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6422"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/multicluster_engine/index#installing-while-connected-online"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-36067"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6348"
},
{
"trust": 0.1,
"url": "https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9."
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/curl"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28327"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30698"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30699"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24921"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38178"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30633"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/install_upgrade/installing-while-connected-online-mce"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6345"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28614"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26377"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8841"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28615"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42916"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-35252"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28614"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28615"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26377"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32221"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "169318"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "168282"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "169318"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "168282"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-09-07T16:37:33",
"db": "PACKETSTORM",
"id": "168265"
},
{
"date": "2022-09-27T16:01:00",
"db": "PACKETSTORM",
"id": "168538"
},
{
"date": "2022-09-01T16:30:25",
"db": "PACKETSTORM",
"id": "168213"
},
{
"date": "2023-06-06T17:04:24",
"db": "PACKETSTORM",
"id": "172765"
},
{
"date": "2022-09-13T15:29:12",
"db": "PACKETSTORM",
"id": "168347"
},
{
"date": "2022-09-07T16:53:57",
"db": "PACKETSTORM",
"id": "168280"
},
{
"date": "2022-08-28T19:12:00",
"db": "PACKETSTORM",
"id": "169318"
},
{
"date": "2022-12-02T15:57:08",
"db": "PACKETSTORM",
"id": "170083"
},
{
"date": "2022-09-07T16:56:15",
"db": "PACKETSTORM",
"id": "168282"
},
{
"date": "2022-12-08T21:28:21",
"db": "PACKETSTORM",
"id": "170165"
},
{
"date": "2022-06-27T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"date": "2022-07-07T13:15:08.340000",
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"date": "2024-03-27T15:00:54.267000",
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "169318"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
}
],
"trust": 0.7
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "curl Resource Management Error Vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
}
],
"trust": 0.6
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "resource management error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
}
],
"trust": 0.6
}
}
VAR-202109-1790
Vulnerability from variot - Updated: 2024-07-23 20:50 A user can tell curl >= 7.20.0 and <= 7.78.0 to require a successful upgrade to TLS when speaking to an IMAP, POP3 or FTP server (--ssl-reqd on the command line or CURLOPT_USE_SSL set to CURLUSESSL_CONTROL or CURLUSESSL_ALL with libcurl). This requirement could be bypassed if the server would return a properly crafted but perfectly legitimate response. This flaw would then make curl silently continue its operations without TLS, contrary to the instructions and expectations, exposing possibly sensitive data in clear text over the network. A security issue was found in curl prior to 7.79.0. Description:
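The --ssl-reqd behavior described above can be exercised from the command line. A minimal sketch — the mailbox URL and credentials are placeholders, not taken from the advisory:

```shell
# Require a successful TLS upgrade before any POP3 commands are sent.
# On vulnerable curl versions (7.20.0 through 7.78.0) a malicious server
# could bypass this check with a crafted but protocol-valid response;
# 7.79.0 and later fail closed as intended.
curl --ssl-reqd --user 'user:password' 'pop3://mail.example.com/1'
```

With libcurl the equivalent is setting CURLOPT_USE_SSL to CURLUSESSL_CONTROL or CURLUSESSL_ALL via curl_easy_setopt(), as the advisory notes.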
Red Hat 3scale API Management delivers centralized API management features through a distributed, cloud-hosted layer. It includes built-in features to help in building a more successful API program, including access control, rate limits, payment gateway integration, and developer experience tools. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.11/html-single/installing_3scale/index
- Bugs fixed (https://bugzilla.redhat.com/):
1912487 - CVE-2020-26247 rubygem-nokogiri: XML external entity injection via Nokogiri::XML::Schema
- JIRA issues fixed (https://issues.jboss.org/):
THREESCALE-6868 - [3scale][2.11][LO-prio] Improve select default Application plan
THREESCALE-6879 - [3scale][2.11][HI-prio] Add 'Create new Application' flow to Product > Applications index
THREESCALE-7030 - Address scalability in 'Create new Application' form
THREESCALE-7203 - Fix Zync resync command in 5.6.9. Creating equivalent Zync routes
THREESCALE-7475 - Some api calls result in "Destroying user session"
THREESCALE-7488 - Ability to add external Lua dependencies for custom policies
THREESCALE-7573 - Enable proxy environment variables via the APICAST CRD
THREESCALE-7605 - type change of "policies_config" in /admin/api/services/{service_id}/proxy.json
THREESCALE-7633 - Signup form in developer portal is disabled for users authenticated via external SSO
THREESCALE-7644 - Metrics: Service for 3scale operator is missing
THREESCALE-7646 - Cleanup/refactor Products and Backends index logic
THREESCALE-7648 - Remove "#context-menu" from the url
THREESCALE-7704 - Images based on RHEL 7 should contain at least ca-certificates-2021.2.50-72.el7_9.noarch.rpm
THREESCALE-7731 - Reenable operator metrics service for apicast-operator
THREESCALE-7761 - 3scale Operator doesn't respect *_proxy env vars
THREESCALE-7765 - Remove MessageBus from System
THREESCALE-7834 - admin can't create application when developer is not allowed to pick a plan
THREESCALE-7863 - Update some Obsolete API's in 3scale_v2.js
THREESCALE-7884 - Service top application endpoint is not working properly
THREESCALE-7912 - ServiceMonitor created by monitoring showing HTTP 400 error
THREESCALE-7913 - ServiceMonitor for 3scale operator has wide selector
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
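As a sketch of the CLI update path mentioned above (run against a cluster you are logged in to; output depends on the cluster's configured channel):

```shell
# Show the cluster's current version, channel, and any available updates
oc adm upgrade

# Move the cluster to the newest version available in its channel
oc adm upgrade --to-latest=true
```

The OpenShift Console offers the same check under Administration > Cluster Settings; the full procedure is in the updating-cluster-cli documentation linked above.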
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for moderate instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several pages
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ignored when oc command loads dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002076 - opm render does not automatically pull in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reset to “” during the installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentation link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim column value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - Running opm index prune fails with error: removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when it cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when querying metrics about alert rules KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned PVC attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructuring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URL expires
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Update of a task fails (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers the entire qcow2 image, causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard cannot be used if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being imported
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content (/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine] Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job does not complete
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table has filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP with aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with a cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user can't load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value succeeds
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group no longer exists.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation: openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577
https://access.redhat.com/security/cve/CVE-2016-10228
https://access.redhat.com/security/cve/CVE-2017-14502
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2018-1000858
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9169
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-25013
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-8927
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-9952
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-13434
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-15358
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-25660
https://access.redhat.com/security/cve/CVE-2020-25677
https://access.redhat.com/security/cve/CVE-2020-27618
https://access.redhat.com/security/cve/CVE-2020-27781
https://access.redhat.com/security/cve/CVE-2020-29361
https://access.redhat.com/security/cve/CVE-2020-29362
https://access.redhat.com/security/cve/CVE-2020-29363
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/cve/CVE-2021-3326
https://access.redhat.com/security/cve/CVE-2021-3449
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3521
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3733
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/cve/CVE-2021-21684
https://access.redhat.com/security/cve/CVE-2021-22946
https://access.redhat.com/security/cve/CVE-2021-22947
https://access.redhat.com/security/cve/CVE-2021-25215
https://access.redhat.com/security/cve/CVE-2021-27218
https://access.redhat.com/security/cve/CVE-2021-30666
https://access.redhat.com/security/cve/CVE-2021-30761
https://access.redhat.com/security/cve/CVE-2021-30762
https://access.redhat.com/security/cve/CVE-2021-33928
https://access.redhat.com/security/cve/CVE-2021-33929
https://access.redhat.com/security/cve/CVE-2021-33930
https://access.redhat.com/security/cve/CVE-2021-33938
https://access.redhat.com/security/cve/CVE-2021-36222
https://access.redhat.com/security/cve/CVE-2021-37750
https://access.redhat.com/security/cve/CVE-2021-39226
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-43813
https://access.redhat.com/security/cve/CVE-2021-44716
https://access.redhat.com/security/cve/CVE-2021-44717
https://access.redhat.com/security/cve/CVE-2022-0532
https://access.redhat.com/security/cve/CVE-2022-21673
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL
0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne
eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM
CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF
aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC
Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp
sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO
RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN
rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry
bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z
7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT
b5PUYUBIZLc=
=GUDA
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

. Relevant releases/architectures:
Red Hat Enterprise Linux BaseOS EUS (v. 8.2) - aarch64, ppc64le, s390x, x86_64
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.

Package List:

Red Hat Enterprise Linux BaseOS EUS (v. 8.2):

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- ==========================================================================
Ubuntu Security Notice USN-5079-3
September 21, 2021
curl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 18.04 LTS
Summary:
USN-5079-1 fixed vulnerabilities in curl. One of the fixes introduced a regression on Ubuntu 18.04 LTS. This update fixes the problem.
We apologize for the inconvenience.
Original advisory details:
It was discovered that curl incorrectly handled memory when sending data to an MQTT server. A remote attacker could use this issue to cause curl to crash, resulting in a denial of service, or possibly execute arbitrary code. (CVE-2021-22945) Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. (CVE-2021-22946) Patrick Monnerat discovered that curl incorrectly handled responses received before STARTTLS. A remote attacker could possibly use this issue to inject responses and intercept communications. (CVE-2021-22947)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 18.04 LTS:
  curl            7.58.0-2ubuntu3.16
  libcurl3-gnutls 7.58.0-2ubuntu3.16
  libcurl3-nss    7.58.0-2ubuntu3.16
  libcurl4        7.58.0-2ubuntu3.16
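Whether an installed package already carries the fix can be sketched by comparing its version string against the fixed version above with `sort -V` (a minimal sketch; the `installed` value below is a made-up example, and on a real system it would come from `dpkg-query -W -f='${Version}' curl`):

```shell
#!/bin/sh
# Sketch: check an installed curl version against the USN-5079-3 fixed version
# using GNU sort's version ordering (sort -V).
fixed="7.58.0-2ubuntu3.16"
installed="7.58.0-2ubuntu3.15"   # example value, not read from a real system

# If the fixed version sorts first, the installed version is >= the fix.
oldest=$(printf '%s\n%s\n' "$fixed" "$installed" | sort -V | head -n1)
if [ "$oldest" = "$fixed" ]; then
  echo "curl is patched"
else
  echo "curl needs updating"    # printed for the example value above
fi
```

Note that `sort -V` approximates, but does not fully implement, Debian's version-comparison rules (epochs and `~` pre-release markers sort differently), so this is a rough check rather than a substitute for the package manager.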
In general, a standard system update will make all the necessary changes.

Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1858 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1917 - [release-5.1] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
- Bugs fixed (https://bugzilla.redhat.com/):
1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
2016256 - Release of OpenShift Serverless Eventing 1.19.0
2016258 - Release of OpenShift Serverless Serving 1.19.0
- Description:
Service Telemetry Framework (STF) provides automated collection of measurements and data from remote clients, such as Red Hat OpenStack Platform or third-party nodes. STF then transmits the information to a centralized, receiving Red Hat OpenShift Container Platform (OCP) deployment for storage, retrieval, and monitoring. Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally.

Bugs fixed (https://bugzilla.redhat.com/):
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
-
Gentoo Linux Security Advisory GLSA 202212-01
https://security.gentoo.org/
Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01
Synopsis
Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.
Background
A command line tool and library for transferring data with URLs.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/curl < 7.86.0 >= 7.86.0
Description
Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All curl users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"
References
[ 1 ] CVE-2021-22922 https://nvd.nist.gov/vuln/detail/CVE-2021-22922
[ 2 ] CVE-2021-22923 https://nvd.nist.gov/vuln/detail/CVE-2021-22923
[ 3 ] CVE-2021-22925 https://nvd.nist.gov/vuln/detail/CVE-2021-22925
[ 4 ] CVE-2021-22926 https://nvd.nist.gov/vuln/detail/CVE-2021-22926
[ 5 ] CVE-2021-22945 https://nvd.nist.gov/vuln/detail/CVE-2021-22945
[ 6 ] CVE-2021-22946 https://nvd.nist.gov/vuln/detail/CVE-2021-22946
[ 7 ] CVE-2021-22947 https://nvd.nist.gov/vuln/detail/CVE-2021-22947
[ 8 ] CVE-2022-22576 https://nvd.nist.gov/vuln/detail/CVE-2022-22576
[ 9 ] CVE-2022-27774 https://nvd.nist.gov/vuln/detail/CVE-2022-27774
[ 10 ] CVE-2022-27775 https://nvd.nist.gov/vuln/detail/CVE-2022-27775
[ 11 ] CVE-2022-27776 https://nvd.nist.gov/vuln/detail/CVE-2022-27776
[ 12 ] CVE-2022-27779 https://nvd.nist.gov/vuln/detail/CVE-2022-27779
[ 13 ] CVE-2022-27780 https://nvd.nist.gov/vuln/detail/CVE-2022-27780
[ 14 ] CVE-2022-27781 https://nvd.nist.gov/vuln/detail/CVE-2022-27781
[ 15 ] CVE-2022-27782 https://nvd.nist.gov/vuln/detail/CVE-2022-27782
[ 16 ] CVE-2022-30115 https://nvd.nist.gov/vuln/detail/CVE-2022-30115
[ 17 ] CVE-2022-32205 https://nvd.nist.gov/vuln/detail/CVE-2022-32205
[ 18 ] CVE-2022-32206 https://nvd.nist.gov/vuln/detail/CVE-2022-32206
[ 19 ] CVE-2022-32207 https://nvd.nist.gov/vuln/detail/CVE-2022-32207
[ 20 ] CVE-2022-32208 https://nvd.nist.gov/vuln/detail/CVE-2022-32208
[ 21 ] CVE-2022-32221 https://nvd.nist.gov/vuln/detail/CVE-2022-32221
[ 22 ] CVE-2022-35252 https://nvd.nist.gov/vuln/detail/CVE-2022-35252
[ 23 ] CVE-2022-35260 https://nvd.nist.gov/vuln/detail/CVE-2022-35260
[ 24 ] CVE-2022-42915 https://nvd.nist.gov/vuln/detail/CVE-2022-42915
[ 25 ] CVE-2022-42916 https://nvd.nist.gov/vuln/detail/CVE-2022-42916
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202212-01
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202109-1790",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "communications cloud native core network slice selection function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.8.0"
},
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "oncommand insight",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.3"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.59"
},
{
"model": "sinec infrastructure network services",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0.1.1"
},
{
"model": "snapcenter",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "curl",
"scope": "gte",
"trust": 1.0,
"vendor": "haxx",
"version": "7.20.0"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.11.0"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.57"
},
{
"model": "communications cloud native core network function cloud native environment",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.10.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "oncommand workflow automation",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.79.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.0"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.26"
},
{
"model": "communications cloud native core console",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.3"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "commerce guided search",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.3.2"
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"model": "communications cloud native core service communication proxy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.0"
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "communications cloud native core security edge protection proxy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.35"
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.1"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:haxx:curl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.79.0",
"versionStartIncluding": "7.20.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:snapcenter:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:solidfire_baseboard_management_controller_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:solidfire_baseboard_management_controller:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.26",
"versionStartIncluding": "8.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "5.7.35",
"versionStartIncluding": "5.7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_slice_selection_function:1.8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:1.15.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_function_cloud_native_environment:1.10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_service_communication_proxy:1.15.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:1.15.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:1.11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.3",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_infrastructure_network_services:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.0.1.1",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:commerce_guided_search:11.3.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:22.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.1.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_security_edge_protection_proxy:22.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_console:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:9.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.0.6",
"versionStartIncluding": "9.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.2.12",
"versionStartIncluding": "8.2.0",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "PACKETSTORM",
"id": "168011"
}
],
"trust": 0.6
},
"cve": "CVE-2021-22946",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "VHN-381420",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-22946",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-381420",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "A user can tell curl \u003e= 7.20.0 and \u003c= 7.78.0 to require a successful upgrade to TLS when speaking to an IMAP, POP3 or FTP server (`--ssl-reqd` on the command line or `CURLOPT_USE_SSL` set to `CURLUSESSL_CONTROL` or `CURLUSESSL_ALL` with libcurl). This requirement could be bypassed if the server would return a properly crafted but perfectly legitimate response. This flaw would then make curl silently continue its operations **without TLS** contrary to the instructions and expectations, exposing possibly sensitive data in clear text over the network. A security issue was found in curl prior to 7.79.0. Description:\n\nRed Hat 3scale API Management delivers centralized API management features\nthrough a distributed, cloud-hosted layer. It includes built-in features to\nhelp in building a more successful API program, including access control,\nrate limits, payment gateway integration, and developer experience tools. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.11/html-single/installing_3scale/index\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1912487 - CVE-2020-26247 rubygem-nokogiri: XML external entity injection via Nokogiri::XML::Schema\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nTHREESCALE-6868 - [3scale][2.11][LO-prio] Improve select default Application plan\nTHREESCALE-6879 - [3scale][2.11][HI-prio] Add \u0027Create new Application\u0027 flow to Product \u003e Applications index\nTHREESCALE-7030 - Address scalability in \u0027Create new Application\u0027 form\nTHREESCALE-7203 - Fix Zync resync command in 5.6.9. 
Creating equivalent Zync routes\nTHREESCALE-7475 - Some api calls result in \"Destroying user session\"\nTHREESCALE-7488 - Ability to add external Lua dependencies for custom policies\nTHREESCALE-7573 - Enable proxy environment variables via the APICAST CRD\nTHREESCALE-7605 - type change of \"policies_config\" in /admin/api/services/{service_id}/proxy.json\nTHREESCALE-7633 - Signup form in developer portal is disabled for users authenticted via external SSO\nTHREESCALE-7644 - Metrics: Service for 3scale operator is missing\nTHREESCALE-7646 - Cleanup/refactor Products and Backends index logic\nTHREESCALE-7648 - Remove \"#context-menu\" from the url\nTHREESCALE-7704 - Images based on RHEL 7 should contain at least ca-certificates-2021.2.50-72.el7_9.noarch.rpm\nTHREESCALE-7731 - Reenable operator metrics service for apicast-operator\nTHREESCALE-7761 - 3scale Operator doesn\u0027t respect *_proxy env vars\nTHREESCALE-7765 - Remove MessageBus from System\nTHREESCALE-7834 - admin can\u0027t create application when developer is not allowed to pick a plan\nTHREESCALE-7863 - Update some Obsolete API\u0027s in 3scale_v2.js\nTHREESCALE-7884 - Service top application endpoint is not working properly\nTHREESCALE-7912 - ServiceMonitor created by monitoring showing HTTP 400 error\nTHREESCALE-7913 - ServiceMonitor for 3scale operator has wide selector\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID: RHSA-2022:0056-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0056\nIssue date: 2022-03-10\nCVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n CVE-2021-33929 CVE-2021-33930 
CVE-2021-33938 \n CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image 
operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. \n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t 
resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String 
updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not
established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC
\"needVhostNet\" \u0026 \"isRdma\"\n1984592 - global pull secret not working in OCP4.7.4+ for additional private registries\n1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs\n1985486 - Cluster Proxy not used during installation on OSP with Kuryr\n1985724 - VM Details Page missing translations\n1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted\n1985933 - Downstream image registry recommendation\n1985965 - oVirt CSI driver does not report volume stats\n1986216 - [scale] SNO: Slow Pod recovery due to \"timed out waiting for OVS port binding\"\n1986237 - \"MachineNotYetDeleted\" in Pending state , alert not fired\n1986239 - crictl create fails with \"PID namespace requested, but sandbox infra container invalid\"\n1986302 - console continues to fetch prometheus alert and silences for normal user\n1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI\n1986338 - error creating list of resources in Import YAML\n1986502 - yaml multi file dnd duplicates previous dragged files\n1986819 - fix string typos for hot-plug disks\n1987044 - [OCPV48] Shutoff VM is being shown as \"Starting\" in WebUI when using spec.runStrategy Manual/RerunOnFailure\n1987136 - Declare operatorframework.io/arch.* labels for all operators\n1987257 - Go-http-client user-agent being used for oc adm mirror requests\n1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold\n1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP\n1988406 - SSH key dropped when selecting \"Customize virtual machine\" in UI\n1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade\n1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another 
master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details
page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates
to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from
OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations
doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI
installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event
reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status
filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 -
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer: ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10] sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs : Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2017427 - NTO does not restart TuneD daemon when profile application is taking too long\n2017535 - Broken Argo CD link image on GitOps Details Page\n2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references\n2017564 - On-prem prepender dispatcher script overwrites DNS search settings\n2017565 - CCMO does not handle additionalTrustBundle on Azure Stack\n2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice\n2017606 - [e2e][automation] add test to verify send key for VNC console\n2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes\n2017656 - VM IP address is \"undefined\" under VM details -\u003e ssh field\n2017663 - SSH password authentication is disabled when public key is not supplied\n2017680 - [gcp] Couldn\u2019t enable support for instances with GPUs on GCP\n2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set\n2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource\n2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults\n2017761 - [e2e][automation] dummy bug for 4.9 test dependency\n2017872 - Add Sprint 209 translations\n2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances\n2017879 - Add Chinese translation for \"alternate\"\n2017882 - multus: add handling of pod UIDs passed from runtime\n2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods\n2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI\n2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS\n2018094 - the tooltip length is limited\n2018152 - CNI pod is not 
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
\n2019096 - Update SRO leader election timeout to support SNO\n2019129 - SRO in operator hub points to wrong repo for README\n2019181 - Performance profile does not apply\n2019198 - ptp offset metrics are not named according to the log output\n2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest\n2019284 - Stop action should not in the action list while VMI is not running\n2019346 - zombie processes accumulation and Argument list too long\n2019360 - [RFE] Virtualization Overview page\n2019452 - Logger object in LSO appends to existing logger recursively\n2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect\n2019634 - Pause and migration is enabled in action list for a user who has view only permission\n2019636 - Actions in VM tabs should be disabled when user has view only permission\n2019639 - \"Take snapshot\" should be disabled while VM image is still been importing\n2019645 - Create button is not removed on \"Virtual Machines\" page for view only user\n2019646 - Permission error should pop-up immediately while clicking \"Create VM\" button on template page for view only user\n2019647 - \"Remove favorite\" and \"Create new Template\" should be disabled in template action list for view only user\n2019717 - cant delete VM with un-owned pvc attached\n2019722 - The shared-resource-csi-driver-node pod runs as \u201cBestEffort\u201d qosClass\n2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as \"Always\"\n2019744 - [RFE] Suggest users to download newest RHEL 8 version\n2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level\n2019827 - Display issue with top-level menu items running demo plugin\n2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded\n2019886 - Kuryr unable to finish ports recovery upon controller 
restart\n2019948 - [RFE] Restructring Virtualization links\n2019972 - The Nodes section doesn\u0027t display the csr of the nodes that are trying to join the cluster\n2019977 - Installer doesn\u0027t validate region causing binary to hang with a 60 minute timeout\n2019986 - Dynamic demo plugin fails to build\n2019992 - instance:node_memory_utilisation:ratio metric is incorrect\n2020001 - Update dockerfile for demo dynamic plugin to reflect dir change\n2020003 - MCD does not regard \"dangling\" symlinks as a files, attempts to write through them on next backup, resulting in \"not writing through dangling symlink\" error and degradation. \n2020107 - cluster-version-operator: remove runlevel from CVO namespace\n2020153 - Creation of Windows high performance VM fails\n2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn\u0027t be public\n2020250 - Replacing deprecated ioutil\n2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build\n2020275 - ClusterOperators link in console returns blank page during upgrades\n2020377 - permissions error while using tcpdump option with must-gather\n2020489 - coredns_dns metrics don\u0027t include the custom zone metrics data due to CoreDNS prometheus plugin is not defined\n2020498 - \"Show PromQL\" button is disabled\n2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature\n2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI\n2020664 - DOWN subports are not cleaned up\n2020904 - When trying to create a connection from the Developer view between VMs, it fails\n2021016 - \u0027Prometheus Stats\u0027 of dashboard \u0027Prometheus Overview\u0027 miss data on console compared with Grafana\n2021017 - 404 page not found error on knative eventing page\n2021031 - QE - Fix the topology CI scripts\n2021048 - [RFE] Added MAC Spoof check\n2021053 - Metallb operator presented as 
community operator\n2021067 - Extensive number of requests from storage version operator in cluster\n2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes\n2021135 - [azure-file-csi-driver] \"make unit-test\" returns non-zero code, but tests pass\n2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node\n2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating\n2021152 - imagePullPolicy is \"Always\" for ptp operator images\n2021191 - Project admins should be able to list available network attachment defintions\n2021205 - Invalid URL in git import form causes validation to not happen on URL change\n2021322 - cluster-api-provider-azure should populate purchase plan information\n2021337 - Dynamic Plugins: ResourceLink doesn\u0027t render when passed a groupVersionKind\n2021364 - Installer requires invalid AWS permission s3:GetBucketReplication\n2021400 - Bump documentationBaseURL to 4.10\n2021405 - [e2e][automation] VM creation wizard Cloud Init editor\n2021433 - \"[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified\" test fail permanently on disconnected\n2021466 - [e2e][automation] Windows guest tool mount\n2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver\n2021551 - Build is not recognizing the USER group from an s2i image\n2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character\n2021629 - api request counts for current hour are incorrect\n2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page\n2021693 - Modals assigned modal-lg class are no longer the correct width\n2021724 - Observe \u003e Dashboards: Graph lines are not visible when obscured by other lines\n2021731 - CCO occasionally down, reporting 
networksecurity.googleapis.com API as disabled\n2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags\n2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem\n2022053 - dpdk application with vhost-net is not able to start\n2022114 - Console logging every proxy request\n2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)\n2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long\n2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2022509 - getOverrideForManifest does not check manifest.GVK.Group\n2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache\n2022612 - no namespace field for \"Kubernetes / Compute Resources / Namespace (Pods)\" admin console dashboard\n2022627 - Machine object not picking up external FIP added to an openstack vm\n2022646 - configure-ovs.sh failure - Error: unknown connection \u0027WARN:\u0027\n2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox\n2022801 - Add Sprint 210 translations\n2022811 - Fix kubelet log rotation file handle leak\n2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations\n2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2022880 - Pipeline renders with minor visual artifact with certain task dependencies\n2022886 - Incorrect URL in operator description\n2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config\n2023060 - [e2e][automation] Windows VM with CDROM migration\n2023077 - [e2e][automation] Home 
Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store , backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn't supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor
widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Relevant releases/architectures:\n\nRed Hat Enterprise Linux BaseOS EUS (v. 8.2) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. Package List:\n\nRed Hat Enterprise Linux BaseOS EUS (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
==========================================================================\nUbuntu Security Notice USN-5079-3\nSeptember 21, 2021\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 18.04 LTS\n\nSummary:\n\nUSN-5079-1 introduced a regression in curl. One of the fixes introduced a\nregression on Ubuntu 18.04 LTS. This update fixes the problem. \n\nWe apologize for the inconvenience. \n\nOriginal advisory details:\n\n It was discovered that curl incorrect handled memory when sending data to\n an MQTT server. A remote attacker could use this issue to cause curl to\n crash, resulting in a denial of service, or possibly execute arbitrary\n code. (CVE-2021-22945)\n Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. (CVE-2021-22946)\n Patrick Monnerat discovered that curl incorrectly handled responses\n received before STARTTLS. A remote attacker could possibly use this issue\n to inject responses and intercept communications. (CVE-2021-22947)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 18.04 LTS:\n curl 7.58.0-2ubuntu3.16\n libcurl3-gnutls 7.58.0-2ubuntu3.16\n libcurl3-nss 7.58.0-2ubuntu3.16\n libcurl4 7.58.0-2ubuntu3.16\n\nIn general, a standard system update will make all the necessary changes. Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1858 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1917 - [release-5.1] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\n\n6. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n2016256 - Release of OpenShift Serverless Eventing 1.19.0\n2016258 - Release of OpenShift Serverless Serving 1.19.0\n\n5. Description:\n\nService Telemetry Framework (STF) provides automated collection of\nmeasurements and data from remote clients, such as Red Hat OpenStack\nPlatform or third-party nodes. STF then transmits the information to a\ncentralized, receiving Red Hat OpenShift Container Platform (OCP)\ndeployment for storage, retrieval, and monitoring. \nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: curl: Multiple Vulnerabilities\n Date: December 19, 2022\n Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. 
\n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/curl \u003c 7.86.0 \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] CVE-2022-30115\n https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] 
CVE-2022-32206\n https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22946"
},
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "VULMON",
"id": "CVE-2021-22946"
},
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "PACKETSTORM",
"id": "168011"
},
{
"db": "PACKETSTORM",
"id": "170303"
}
],
"trust": 1.8
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-22946",
"trust": 2.0
},
{
"db": "SIEMENS",
"id": "SSA-389290",
"trust": 1.1
},
{
"db": "HACKERONE",
"id": "1334111",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "165053",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165337",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "164993",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "166112",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165135",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164740",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165209",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166319",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164948",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-381420",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-22946",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166279",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164220",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168011",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "VULMON",
"id": "CVE-2021-22946"
},
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "PACKETSTORM",
"id": "168011"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"id": "VAR-202109-1790",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T20:50:39.175000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-22946 log"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-22946"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-319",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.2,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.1,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20211029-0003/"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220121-0008/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213183"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2022/dsa-5197"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/mar/29"
},
{
"trust": 1.1,
"url": "https://hackerone.com/reports/1334111"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2021/09/msg00022.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/08/msg00017.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33938"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33929"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33928"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3733"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33930"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3656"
},
{
"trust": 0.2,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36385"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-0512"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13050"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9925"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9802"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-30762"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-8927"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9895"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8625"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8812"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3899"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8819"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3867"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8720"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9893"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8808"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3902"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3900"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-30761"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3537"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9805"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8820"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9807"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8769"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8710"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8813"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9850"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8811"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9803"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9862"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-27618"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-25013"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3885"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-15503"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3326"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-10018"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8835"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2017-14502"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8764"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8844"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3865"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-1730"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3864"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3520"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-15358"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14391"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3862"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3901"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8823"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3518"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13434"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-1000858"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3895"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-11793"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-20454"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9894"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8816"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9843"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13627"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8771"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3897"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9806"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8814"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14889"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8743"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9915"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8815"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8783"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-9169"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-20807"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-29362"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3516"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-29361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-9952"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2016-10228"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3517"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20305"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-29363"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8766"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3868"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8846"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-3894"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-30666"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8782"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
},
{
"trust": 0.1,
"url": "http://seclists.org/oss-sec/2021/q3/167"
},
{
"trust": 0.1,
"url": "https://security.archlinux.org/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.11/html-single/installing_3scale/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26247"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26247"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25215"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3449"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27781"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0055"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2014-3577"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25660"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21684"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0056"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39226"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0532"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25677"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0635"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5079-3"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.16"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5079-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/bugs/1944120"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4628"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "VULMON",
"id": "CVE-2021-22946"
},
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "PACKETSTORM",
"id": "168011"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "VULMON",
"id": "CVE-2021-22946"
},
{
"db": "PACKETSTORM",
"id": "165337"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "PACKETSTORM",
"id": "168011"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-09-29T00:00:00",
"db": "VULHUB",
"id": "VHN-381420"
},
{
"date": "2021-12-17T14:04:30",
"db": "PACKETSTORM",
"id": "165337"
},
{
"date": "2022-03-11T16:38:38",
"db": "PACKETSTORM",
"id": "166279"
},
{
"date": "2022-02-23T13:41:41",
"db": "PACKETSTORM",
"id": "166112"
},
{
"date": "2021-09-21T15:39:10",
"db": "PACKETSTORM",
"id": "164220"
},
{
"date": "2021-11-17T15:07:42",
"db": "PACKETSTORM",
"id": "164993"
},
{
"date": "2021-11-23T17:10:05",
"db": "PACKETSTORM",
"id": "165053"
},
{
"date": "2022-08-09T14:36:05",
"db": "PACKETSTORM",
"id": "168011"
},
{
"date": "2022-12-19T13:48:31",
"db": "PACKETSTORM",
"id": "170303"
},
{
"date": "2021-09-29T20:15:08.187000",
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-01-05T00:00:00",
"db": "VULHUB",
"id": "VHN-381420"
},
{
"date": "2024-03-27T15:12:52.090000",
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "168011"
}
],
"trust": 0.2
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat Security Advisory 2021-5191-02",
"sources": [
{
"db": "PACKETSTORM",
"id": "165337"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "csrf",
"sources": [
{
"db": "PACKETSTORM",
"id": "166279"
}
],
"trust": 0.1
}
}
VAR-202105-1306
Vulnerability from variot - Updated: 2024-07-23 20:49

The mq_notify function in the GNU C Library (aka glibc) versions 2.32 and 2.33 has a use-after-free. It may use the notification thread attributes object (passed through its struct sigevent parameter) after it has been freed by the caller, leading to a denial of service (application crash) or possibly unspecified other impact. The GNU C Library (aka glibc) is thus vulnerable to use of freed memory: information may be obtained or tampered with, and service may be disrupted (DoS). The vulnerability stems from a use-after-free flaw in the library's mq_notify function. Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1897 - Applying cluster state is causing elasticsearch to hit an issue and become unusable
LOG-1925 - [release-5.3] No datapoint for CPU on openshift-logging dashboard
LOG-1962 - [release-5.3] CLO panic: runtime error: slice bounds out of range [:-1]
- Solution:
OSP 16.2.z Release - OSP Director Operator Containers
- Bugs fixed (https://bugzilla.redhat.com/):
2025995 - Rebase tech preview on latest upstream v1.2.x branch
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2036784 - osp controller (fencing enabled) in downed state after system manual crash test
Clusters and applications are all visible and managed from a single console — with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/
Security updates:
-
object-path: Type confusion vulnerability can lead to a bypass of CVE-2020-15256 (CVE-2021-23434)
-
follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)
Related bugs:
-
RHACM 2.2.11 images (Bugzilla #2029508)
-
ClusterImageSet has 4.5 which is not supported in ACM 2.2.10 (Bugzilla
2030859)
- Bugs fixed (https://bugzilla.redhat.com/):
1999810 - CVE-2021-23434 object-path: Type confusion vulnerability can lead to a bypass of CVE-2020-15256
2029508 - RHACM 2.2.11 images
2030859 - ClusterImageSet has 4.5 which is not supported in ACM 2.2.10
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
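CVE-2021-23434 was a type-confusion bypass of object-path's earlier prototype-pollution fix (CVE-2020-15256): a crafted path component could still reach Object.prototype. The flaw itself is JavaScript-specific, but the defensive shape — deny-listing special keys in a deep-set helper — can be sketched in Python. This is a toy analogue with names of my own choosing, not the library's actual code:

```python
# Keys that, in the JavaScript original, would let a path write reach
# Object.prototype and pollute every object in the process.
FORBIDDEN_KEYS = {"__proto__", "constructor", "prototype"}


def set_path(obj: dict, path: str, value) -> None:
    """Deep-set obj[a][b][c] = value for path 'a.b.c', refusing special keys."""
    parts = path.split(".")
    for part in parts:
        if part in FORBIDDEN_KEYS:
            raise ValueError(f"refusing special key: {part!r}")
    node = obj
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value


cfg: dict = {}
set_path(cfg, "server.tls.enabled", True)
print(cfg)  # {'server': {'tls': {'enabled': True}}}
```

The lesson from the CVE is that the deny-list must be checked on the key's final, normalized form; the original bypass worked by smuggling a forbidden key past a check that inspected it in a different type or shape.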
- Description:
Red Hat Openshift GitOps is a declarative way to implement continuous deployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):
2050826 - CVE-2022-24348 gitops: Path traversal and dereference of symlinks when passing Helm value files
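Bugs in the CVE-2022-24348 class arise when a user-supplied relative path (here, a Helm value file) is joined to a repository root without checking that the resolved result — after collapsing `..` and following symlinks — still lies inside that root. A minimal containment check might look like the following Python sketch; the function and variable names are mine, not taken from the gitops codebase:

```python
from pathlib import Path


def resolve_inside(base: str, user_path: str) -> Path:
    """Resolve user_path under base, rejecting escapes via '..' or symlinks.

    Path.resolve() follows symlinks and collapses '..', so the containment
    comparison happens on the final physical path, not the textual one.
    """
    base_dir = Path(base).resolve()
    candidate = (base_dir / user_path).resolve()
    if candidate != base_dir and base_dir not in candidate.parents:
        raise ValueError(f"path escapes repository root: {user_path!r}")
    return candidate


print(resolve_inside("/repo", "values/prod.yaml"))  # stays inside the root
try:
    resolve_inside("/repo", "../../etc/passwd")     # escapes, so it is rejected
except ValueError as exc:
    print("blocked:", exc)
```

Checking the textual path instead of the resolved one is exactly what symlink dereference bugs exploit: a symlink named `values.yaml` inside the repository can point anywhere on the filesystem.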
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: ACS 3.67 security and enhancement update
Advisory ID:       RHSA-2021:4902-01
Product:           RHACS
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:4902
Issue date:        2021-12-01
CVE Names:         CVE-2018-20673 CVE-2019-5827 CVE-2019-13750
                   CVE-2019-13751 CVE-2019-17594 CVE-2019-17595
                   CVE-2019-18218 CVE-2019-19603 CVE-2019-20838
                   CVE-2020-12762 CVE-2020-13435 CVE-2020-14155
                   CVE-2020-16135 CVE-2020-24370 CVE-2020-27304
                   CVE-2021-3200 CVE-2021-3445 CVE-2021-3580
                   CVE-2021-3749 CVE-2021-3800 CVE-2021-3801
                   CVE-2021-20231 CVE-2021-20232 CVE-2021-20266
                   CVE-2021-22876 CVE-2021-22898 CVE-2021-22925
                   CVE-2021-23343 CVE-2021-23840 CVE-2021-23841
                   CVE-2021-27645 CVE-2021-28153 CVE-2021-29923
                   CVE-2021-32690 CVE-2021-33560 CVE-2021-33574
                   CVE-2021-35942 CVE-2021-36084 CVE-2021-36085
                   CVE-2021-36086 CVE-2021-36087 CVE-2021-39293
=====================================================================
- Summary:
Updated images are now available for Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
The release of RHACS 3.67 provides the following new features, bug fixes, security patches and system changes:
OpenShift Dedicated support
RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on Amazon Web Services and Google Cloud Platform.
-
Use OpenShift OAuth server as an identity provider If you are using RHACS with OpenShift, you can now configure the built-in OpenShift OAuth server as an identity provider for RHACS.
-
Enhancements for CI outputs Red Hat has improved the usability of RHACS CI integrations. CI outputs now show additional detailed information about the vulnerabilities and the security policies responsible for broken builds.
-
Runtime Class policy criteria Users can now use RHACS to define the container runtime configuration that may be used to run a pod’s containers using the Runtime Class policy criteria.
Security Fix(es):
-
civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API (CVE-2020-27304)
-
nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
-
nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)
-
golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet (CVE-2021-29923)
-
helm: information disclosure vulnerability (CVE-2021-32690)
-
golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)
-
nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
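The IP-parsing item above (CVE-2021-29923) comes down to an ambiguity: Go's net.ParseIP before Go 1.17 read an octet like "010" as decimal 10, while many C-lineage tools following the inet_aton(3) convention read it as octal 8, so "010.8.8.8" names different hosts depending on which component parses it — enough to bypass an allow/deny list. A Python sketch of the three behaviors (function names are mine; this is illustrative, not Go's implementation):

```python
def parse_octet_lenient(text: str) -> int:
    """Plain decimal, ignoring leading zeros (roughly Go's net.ParseIP pre-1.17)."""
    return int(text, 10)


def parse_octet_octal_aware(text: str) -> int:
    """Octal when the octet starts with '0' (the BSD inet_aton(3) convention)."""
    return int(text, 8) if len(text) > 1 and text.startswith("0") else int(text, 10)


def parse_octet_strict(text: str) -> int:
    """Reject the ambiguity outright, as hardened parsers now do."""
    if len(text) > 1 and text.startswith("0"):
        raise ValueError(f"leading zero makes {text!r} ambiguous")
    value = int(text, 10)
    if not 0 <= value <= 255:
        raise ValueError(f"octet out of range: {text!r}")
    return value


# The same text names two different hosts depending on which parser reads it:
print(parse_octet_lenient("010"))      # 10 -> "010.8.8.8" means 10.8.8.8
print(parse_octet_octal_aware("010"))  # 8  -> "010.8.8.8" means 8.8.8.8
```

Go's fix (and the strict parser above) simply refuses octets with extraneous leading zeros, which removes the cross-parser disagreement.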
Bug Fixes The release of RHACS 3.67 includes the following bug fixes:
-
Previously, when using RHACS with the Compliance Operator integration, RHACS did not respect or populate Compliance Operator TailoredProfiles. This has been fixed.
-
Previously, the Alpine Linux package manager (APK) in Image policy looked for the presence of apk package in the image rather than the apk-tools package. This issue has been fixed.
System changes The release of RHACS 3.67 includes the following system changes:
- Scanner now identifies vulnerabilities in Ubuntu 21.10 images.
- The Port exposure method policy criteria now include route as an exposure method.
- The OpenShift: Kubeadmin Secret Accessed security policy now allows the OpenShift Compliance Operator to check for the existence of the Kubeadmin secret without creating a violation.
- The OpenShift Compliance Operator integration now supports using TailoredProfiles.
- The RHACS Jenkins plugin now provides additional security information.
- When you enable the environment variable ROX_NETWORK_ACCESS_LOG for Central, the logs contain the Request URI and X-Forwarded-For header values.
- The default uid:gid pair for the Scanner image is now 65534:65534.
- RHACS adds a new default Scope Manager role that includes minimum permissions to create and modify access scopes.
- If microdnf is part of an image or shows up in process execution, RHACS reports it as a security violation for the Red Hat Package Manager in Image or the Red Hat Package Manager Execution security policies.
- In addition to manually uploading vulnerability definitions in offline mode, you can now upload definitions in online mode.
- You can now format the output of the following roxctl CLI commands in table, csv, or JSON format: image scan, image check & deployment check
-
You can now use a regular expression for the deployment name while specifying policy exclusions
-
Solution:
To take advantage of these new features, fixes and changes, please upgrade Red Hat Advanced Cluster Security for Kubernetes to version 3.67.
- Bugs fixed (https://bugzilla.redhat.com/):
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability
2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)
2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API
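Several of the bugs above (nodejs-axios, nodejs-path-parse, nodejs-prismjs) are ReDoS: a regular expression whose backtracking cost explodes on crafted input. The pattern below is illustrative only — it is not the exact regex from any of those packages — but it shows the characteristic shape, nested quantifiers over the same character class, and how match time grows with input length:

```python
import re
import time

# Nested quantifiers over the same class force the backtracking engine to try
# every way of splitting a run of spaces. Illustrative pattern, an assumption:
# not the actual regex from axios' trim or path-parse.
EVIL_TRIM = re.compile(r"(\s+)+$")


def timed_search(pattern: re.Pattern, text: str) -> float:
    """Return how long one search over text takes, in seconds."""
    start = time.perf_counter()
    pattern.search(text)
    return time.perf_counter() - start


# A run of spaces ending in a non-space never matches, so the engine
# backtracks through roughly 2**n ways to partition the space run.
fast = timed_search(EVIL_TRIM, " " * 10 + "x")
slow = timed_search(EVIL_TRIM, " " * 18 + "x")
print(f"10 spaces: {fast:.6f}s, 18 spaces: {slow:.6f}s")
```

The standard fixes are the ones these packages shipped: replace the regex with a linear-time string operation (e.g. slicing off trailing whitespace), or rewrite the pattern so no position can be consumed by two different quantifier paths.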
- JIRA issues fixed (https://issues.jboss.org/):
RHACS-65 - Release RHACS 3.67.0
- References:
https://access.redhat.com/security/cve/CVE-2018-20673
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-12762
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-16135
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-27304
https://access.redhat.com/security/cve/CVE-2021-3200
https://access.redhat.com/security/cve/CVE-2021-3445
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-3800
https://access.redhat.com/security/cve/CVE-2021-3801
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-20266
https://access.redhat.com/security/cve/CVE-2021-22876
https://access.redhat.com/security/cve/CVE-2021-22898
https://access.redhat.com/security/cve/CVE-2021-22925
https://access.redhat.com/security/cve/CVE-2021-23343
https://access.redhat.com/security/cve/CVE-2021-23840
https://access.redhat.com/security/cve/CVE-2021-23841
https://access.redhat.com/security/cve/CVE-2021-27645
https://access.redhat.com/security/cve/CVE-2021-28153
https://access.redhat.com/security/cve/CVE-2021-29923
https://access.redhat.com/security/cve/CVE-2021-32690
https://access.redhat.com/security/cve/CVE-2021-33560
https://access.redhat.com/security/cve/CVE-2021-33574
https://access.redhat.com/security/cve/CVE-2021-35942
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-39293
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr
Kjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w
tKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e
lq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV
x4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2
e8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK
qnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz
vguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt
G4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT
PTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/
pJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN
T0pPNmsPGZY=
=ux5P
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Summary:
An update is now available for OpenShift Logging 5.2. Description:
Openshift Logging Bug Fix Release (5.2.3)
Security Fix(es):
-
nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
-
nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1857 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1904 - [release-5.2] Fix the Display of ClusterLogging type in OLM
LOG-1916 - [release-5.2] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
- Summary:
The Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Solution:
For details on how to install and use MTC, refer to:
https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html
- Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution
2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)
2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster
2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration
2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202105-1306",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "e-series santricity os controller",
"scope": "lte",
"trust": 1.0,
"vendor": "netapp",
"version": "11.70.1"
},
{
"model": "glibc",
"scope": "eq",
"trust": 1.0,
"vendor": "gnu",
"version": "2.32"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "glibc",
"scope": "eq",
"trust": 1.0,
"vendor": "gnu",
"version": "2.33"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "e-series santricity os controller",
"scope": "gte",
"trust": 1.0,
"vendor": "netapp",
"version": "11.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "c library",
"scope": "eq",
"trust": 0.8,
"vendor": "gnu",
"version": "2.32"
},
{
"model": "c library",
"scope": "eq",
"trust": 0.8,
"vendor": "gnu",
"version": null
},
{
"model": "c library",
"scope": "eq",
"trust": 0.8,
"vendor": "gnu",
"version": "2.33"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:gnu:glibc:2.33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:gnu:glibc:2.32:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:e-series_santricity_os_controller:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "11.70.1",
"versionStartIncluding": "11.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:netapp:solidfire_baseboard_management_controller_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "165288"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166308"
},
{
"db": "PACKETSTORM",
"id": "166309"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165099"
}
],
"trust": 0.8
},
"cve": "CVE-2021-33574",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 7.5,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "HIGH",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Network",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Partial",
"baseScore": 7.5,
"confidentialityImpact": "Partial",
"exploitabilityScore": null,
"id": "CVE-2021-33574",
"impactScore": null,
"integrityImpact": "Partial",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "High",
"trust": 0.9,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 7.5,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "VHN-393646",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "HIGH",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:P/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 9.8,
"baseSeverity": "Critical",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2021-33574",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-33574",
"trust": 1.8,
"value": "CRITICAL"
},
{
"author": "CNNVD",
"id": "CNNVD-202105-1666",
"trust": 0.6,
"value": "CRITICAL"
},
{
"author": "VULHUB",
"id": "VHN-393646",
"trust": 0.1,
"value": "HIGH"
},
{
"author": "VULMON",
"id": "CVE-2021-33574",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "The mq_notify function in the GNU C Library (aka glibc) versions 2.32 and 2.33 has a use-after-free. It may use the notification thread attributes object (passed through its struct sigevent parameter) after it has been freed by the caller, leading to a denial of service (application crash) or possibly unspecified other impact. GNU C Library ( alias glibc) Is vulnerable to the use of freed memory.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. The vulnerability stems from the library\u0027s mq_notify function having a use-after-free feature. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1897 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\nLOG-1925 - [release-5.3] No datapoint for CPU on openshift-logging dashboard\nLOG-1962 - [release-5.3] CLO panic: runtime error: slice bounds out of range [:-1]\n\n6. Solution:\n\nOSP 16.2.z Release - OSP Director Operator Containers\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2025995 - Rebase tech preview on latest upstream v1.2.x branch\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2036784 - osp controller (fencing enabled) in downed state after system manual crash test\n\n5. \n\nClusters and applications are all visible and managed from a single console\n\u2014 with security policy built in. 
See the following Release Notes documentation, which\nwill be updated shortly for this release, for additional details about this\nrelease:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/\n\nSecurity updates:\n\n* object-path: Type confusion vulnerability can lead to a bypass of\nCVE-2020-15256 (CVE-2021-23434)\n\n* follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\nRelated bugs: \n\n* RHACM 2.2.11 images (Bugzilla #2029508)\n\n* ClusterImageSet has 4.5 which is not supported in ACM 2.2.10 (Bugzilla\n#2030859)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1999810 - CVE-2021-23434 object-path: Type confusion vulnerability can lead to a bypass of CVE-2020-15256\n2029508 - RHACM 2.2.11 images\n2030859 - ClusterImageSet has 4.5 which is not supported in ACM 2.2.10\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n\n5. Description:\n\nRed Hat Openshift GitOps is a declarative way to implement continuous\ndeployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):\n\n2050826 - CVE-2022-24348 gitops: Path traversal and dereference of symlinks when passing Helm value files\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: ACS 3.67 security and enhancement update\nAdvisory ID: RHSA-2021:4902-01\nProduct: RHACS\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4902\nIssue date: 2021-12-01\nCVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 \n CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 \n CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 \n CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 \n CVE-2020-16135 CVE-2020-24370 CVE-2020-27304 \n CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 \n CVE-2021-3749 CVE-2021-3800 CVE-2021-3801 \n CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 \n CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 \n CVE-2021-23343 CVE-2021-23840 CVE-2021-23841 \n CVE-2021-27645 CVE-2021-28153 CVE-2021-29923 \n CVE-2021-32690 CVE-2021-33560 CVE-2021-33574 \n CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 \n CVE-2021-36086 CVE-2021-36087 CVE-2021-39293 \n=====================================================================\n\n1. Summary:\n\nUpdated images are now available for Red Hat Advanced Cluster Security for\nKubernetes (RHACS). \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nThe release of RHACS 3.67 provides the following new features, bug fixes,\nsecurity patches and system changes:\n\nOpenShift Dedicated support\n\nRHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on\nAmazon Web Services and Google Cloud Platform. \n\n1. Use OpenShift OAuth server as an identity provider\nIf you are using RHACS with OpenShift, you can now configure the built-in\nOpenShift OAuth server as an identity provider for RHACS. \n\n2. 
Enhancements for CI outputs\nRed Hat has improved the usability of RHACS CI integrations. CI outputs now\nshow additional detailed information about the vulnerabilities and the\nsecurity policies responsible for broken builds. \n\n3. Runtime Class policy criteria\nUsers can now use RHACS to define the container runtime configuration that\nmay be used to run a pod\u2019s containers using the Runtime Class policy\ncriteria. \n\nSecurity Fix(es):\n\n* civetweb: directory traversal when using the built-in example HTTP\nform-based file upload mechanism via the mg_handle_form_request API\n(CVE-2020-27304)\n\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n\n* nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)\n\n* golang: net: incorrect parsing of extraneous zero characters at the\nbeginning of an IP address octet (CVE-2021-29923)\n\n* helm: information disclosure vulnerability (CVE-2021-32690)\n\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fixes\nThe release of RHACS 3.67 includes the following bug fixes:\n\n1. Previously, when using RHACS with the Compliance Operator integration,\nRHACS did not respect or populate Compliance Operator TailoredProfiles. \nThis has been fixed. \n\n2. Previously, the Alpine Linux package manager (APK) in Image policy\nlooked for the presence of apk package in the image rather than the\napk-tools package. This issue has been fixed. \n\nSystem changes\nThe release of RHACS 3.67 includes the following system changes:\n\n1. Scanner now identifies vulnerabilities in Ubuntu 21.10 images. \n2. 
The Port exposure method policy criteria now include route as an\nexposure method. \n3. The OpenShift: Kubeadmin Secret Accessed security policy now allows the\nOpenShift Compliance Operator to check for the existence of the Kubeadmin\nsecret without creating a violation. \n4. The OpenShift Compliance Operator integration now supports using\nTailoredProfiles. \n5. The RHACS Jenkins plugin now provides additional security information. \n6. When you enable the environment variable ROX_NETWORK_ACCESS_LOG for\nCentral, the logs contain the Request URI and X-Forwarded-For header\nvalues. \n7. The default uid:gid pair for the Scanner image is now 65534:65534. \n8. RHACS adds a new default Scope Manager role that includes minimum\npermissions to create and modify access scopes. \n9. If microdnf is part of an image or shows up in process execution, RHACS\nreports it as a security violation for the Red Hat Package Manager in Image\nor the Red Hat Package Manager Execution security policies. \n10. In addition to manually uploading vulnerability definitions in offline\nmode, you can now upload definitions in online mode. \n11. You can now format the output of the following roxctl CLI commands in\ntable, csv, or JSON format: image scan, image check \u0026 deployment check\n12. You can now use a regular expression for the deployment name while\nspecifying policy exclusions\n\n3. Solution:\n\nTo take advantage of these new features, fixes and changes, please upgrade\nRed Hat Advanced Cluster Security for Kubernetes to version 3.67. \n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nRHACS-65 - Release RHACS 3.67.0\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-20673\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2020-27304\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps:
//access.redhat.com/security/cve/CVE-2021-3801\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-20266\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-23343\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-27645\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-29923\nhttps://access.redhat.com/security/cve/CVE-2021-32690\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-33574\nhttps://access.redhat.com/security/cve/CVE-2021-35942\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-39293\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr\nKjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w\ntKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e\nlq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV\nx4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2\ne8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK\nqnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz\nvguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt\nG4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT\nPTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/\npJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN\nT0pPNmsPGZY=\n=ux5P\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nAn update is now available for OpenShift Logging 5.2. Description:\n\nOpenshift Logging Bug Fix Release (5.2.3)\n\nSecurity Fix(es):\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1857 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1904 - [release-5.2] Fix the Display of ClusterLogging type in OLM\nLOG-1916 - [release-5.2] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\n\n6. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-33574"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"db": "PACKETSTORM",
"id": "165288"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166308"
},
{
"db": "PACKETSTORM",
"id": "166309"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165099"
}
],
"trust": 2.52
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-33574",
"trust": 3.4
},
{
"db": "PACKETSTORM",
"id": "166308",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166051",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2021-002276",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165758",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "163406",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165862",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164863",
"trust": 0.7
},
{
"db": "CS-HELP",
"id": "SB2021092807",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021070604",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021100416",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3935",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4254",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4172",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0394",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3785",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4095",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4019",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3905",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4229",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4059",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5140",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3214",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0245",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3336",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0716",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1071",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0493",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3398",
"trust": 0.6
},
{
"db": "VULHUB",
"id": "VHN-393646",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-33574",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165288",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165631",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166309",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165129",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165002",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"db": "PACKETSTORM",
"id": "165288"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166308"
},
{
"db": "PACKETSTORM",
"id": "166309"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"id": "VAR-202105-1306",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T20:49:26.394000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Bug\u00a027896",
"trust": 0.8,
"url": "https://sourceware.org/bugzilla/show_bug.cgi?id=27896"
},
{
"title": "Debian CVElist Bug Report Logs: glibc: CVE-2021-33574: mq_notify does not handle separately allocated thread attributes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=7a9966ec919351d3328669aa69ea5e39"
},
{
"title": "Red Hat: CVE-2021-33574",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-33574"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1736",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2022-1736"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-33574 log"
},
{
"title": "Red Hat: Moderate: Release of OpenShift Serverless 1.20.0",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220434 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift distributed tracing 2.1.0 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220318 - security advisory"
},
{
"title": "Red Hat: Important: Release of containers for OSP 16.2 director operator tech preview",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220842 - security advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220580 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.2.11 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220856 - security advisory"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/live-hack-cve/cve-2021-33574 "
},
{
"title": "CVE-2021-33574",
"trust": 0.1,
"url": "https://github.com/jamesgeee/cve-2021-33574 "
},
{
"title": "cks-notes",
"trust": 0.1,
"url": "https://github.com/ruzickap/cks-notes "
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/live-hack-cve/cve-2021-38604 "
},
{
"title": "ochacafe-s5-3",
"trust": 0.1,
"url": "https://github.com/oracle-japan/ochacafe-s5-3 "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-416",
"trust": 1.1
},
{
"problemtype": "Use of freed memory (CWE-416) [NVD Evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20210629-0005/"
},
{
"trust": 1.7,
"url": "https://security.gentoo.org/glsa/202107-07"
},
{
"trust": 1.7,
"url": "https://sourceware.org/bugzilla/show_bug.cgi?id=27896"
},
{
"trust": 1.7,
"url": "https://sourceware.org/bugzilla/show_bug.cgi?id=27896#c1"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2022/10/msg00021.html"
},
{
"trust": 1.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33574"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/kjyyimddyohtp2porlabtohyqyyrezdd/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/rbuuwugxvilqxvweou7n42ichpjnaeup/"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.8,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.8,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/rbuuwugxvilqxvweou7n42ichpjnaeup/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/kjyyimddyohtp2porlabtohyqyyrezdd/"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0245"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3905"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6526524"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1071"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4019"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3398"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165862/red-hat-security-advisory-2022-0434-05.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5140"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/glibc-use-after-free-via-mq-notify-35692"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3336"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3214"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0716"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021092807"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0394"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0493"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3935"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164863/red-hat-security-advisory-2021-4358-03.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4229"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4059"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166051/red-hat-security-advisory-2022-0580-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021070604"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021100416"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4254"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3785"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165758/red-hat-security-advisory-2022-0318-06.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4095"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4172"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163406/gentoo-linux-security-advisory-202107-07.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166308/red-hat-security-advisory-2022-0842-01.html"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.3,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2016-4658"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20271"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3984"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4193"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4122"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3872"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4019"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4192"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-40346"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-39241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5129"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-37136"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44228"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-37137"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20317"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21409"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43267"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3575"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30758"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30689"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30682"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-18032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1801"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1788"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30744"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30797"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21779"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27828"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1871"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30734"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26926"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3272"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0202"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27824"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3572"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0842"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3426"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0465"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0185"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22942"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3564"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25710"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0466"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0856"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25214"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0465"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3752"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3573"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25214"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0920"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24348"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44790"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29923"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4902"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3801"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4032"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3757"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4848"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3620"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"db": "PACKETSTORM",
"id": "165288"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166308"
},
{
"db": "PACKETSTORM",
"id": "166309"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"db": "PACKETSTORM",
"id": "165288"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166308"
},
{
"db": "PACKETSTORM",
"id": "166309"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-05-25T00:00:00",
"db": "VULHUB",
"id": "VHN-393646"
},
{
"date": "2021-05-25T00:00:00",
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"date": "2021-08-19T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"date": "2021-12-15T15:22:36",
"db": "PACKETSTORM",
"id": "165288"
},
{
"date": "2022-01-20T17:48:29",
"db": "PACKETSTORM",
"id": "165631"
},
{
"date": "2022-03-15T15:41:45",
"db": "PACKETSTORM",
"id": "166308"
},
{
"date": "2022-03-15T15:44:21",
"db": "PACKETSTORM",
"id": "166309"
},
{
"date": "2022-02-18T16:37:39",
"db": "PACKETSTORM",
"id": "166051"
},
{
"date": "2021-12-02T16:06:16",
"db": "PACKETSTORM",
"id": "165129"
},
{
"date": "2021-11-17T15:25:40",
"db": "PACKETSTORM",
"id": "165002"
},
{
"date": "2021-11-30T14:44:48",
"db": "PACKETSTORM",
"id": "165099"
},
{
"date": "2021-05-25T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"date": "2021-05-25T22:15:10.410000",
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-11-08T00:00:00",
"db": "VULHUB",
"id": "VHN-393646"
},
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"date": "2021-08-19T01:48:00",
"db": "JVNDB",
"id": "JVNDB-2021-002276"
},
{
"date": "2022-10-18T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"date": "2023-11-07T03:35:52.810000",
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
}
],
"trust": 0.7
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "GNU\u00a0C\u00a0Library\u00a0 Vulnerabilities in the use of freed memory",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-002276"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "resource management error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
}
],
"trust": 0.6
}
}
VAR-202210-0997
Vulnerability from variot - Updated: 2024-07-23 20:33. An issue was discovered in libxml2 before 2.10.3. When parsing a multi-gigabyte XML document with the XML_PARSE_HUGE parser option enabled, several integer counters can overflow. This results in an attempt to access an array at a negative 2GB offset, typically leading to a segmentation fault. libxml2 from xmlsoft.org, as well as products from other vendors that bundle it, contains this integer overflow vulnerability. Exploitation may leave the service in a denial-of-service (DoS) state. libxml2 is an open source library for parsing XML documents. It is written in C and can be called from many languages, such as C, C++, and XSH. At present there is no further information about this vulnerability; please monitor CNNVD or vendor announcements.
CVE-2022-40304
Ned Williamson and Nathan Wachholz discovered a vulnerability when
handling detection of entity reference cycles, which may result in
corrupted dictionary entries. This flaw may lead to logic errors,
including memory errors like double free flaws.
For the stable distribution (bullseye), these problems have been fixed in version 2.9.10+dfsg-6.7+deb11u3.
We recommend that you upgrade your libxml2 packages. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202210-39
https://security.gentoo.org/
Severity: High Title: libxml2: Multiple Vulnerabilities Date: October 31, 2022 Bugs: #877149 ID: 202210-39
Synopsis
Multiple vulnerabilities have been found in libxml2, the worst of which could result in arbitrary code execution.
Background
libxml2 is the XML C parser and toolkit developed for the GNOME project.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 dev-libs/libxml2 < 2.10.3 >= 2.10.3
Description
Multiple vulnerabilities have been discovered in libxml2. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All libxml2 users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=dev-libs/libxml2-2.10.3"
References
[ 1 ] CVE-2022-40303 https://nvd.nist.gov/vuln/detail/CVE-2022-40303 [ 2 ] CVE-2022-40304 https://nvd.nist.gov/vuln/detail/CVE-2022-40304
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-39
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . Description:
Version 1.27.0 of the OpenShift Serverless Operator is supported on Red Hat OpenShift Container Platform versions 4.8, 4.9, 4.10, 4.11 and 4.12.
This release includes security and bug fixes, and enhancements. Bugs fixed (https://bugzilla.redhat.com/):
2156263 - CVE-2022-46175 json5: Prototype Pollution in JSON5 via Parse Method 2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service
- JIRA issues fixed (https://issues.jboss.org/):
LOG-3397 - [Developer Console] "parse error" when testing with normal user
LOG-3441 - [Administrator Console] Seeing "parse error" while using Severity filter for cluster view user
LOG-3463 - [release-5.6] ElasticsearchError error="400 - Rejected by Elasticsearch" when adding some labels in application namespaces
LOG-3477 - [Logging 5.6.0]CLF raises 'invalid: unrecognized outputs: [default]' after adding default to outputRefs.
LOG-3494 - [release-5.6] After querying logs in loki, compactor pod raises many TLS handshake error if retention policy is enabled.
LOG-3496 - [release-5.6] LokiStack status is still 'Pending' when all loki components are running
LOG-3510 - [release-5.6] TLS errors on Loki controller pod due to bad certificate
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift API for Data Protection (OADP) 1.1.2 security and bug fix update Advisory ID: RHSA-2023:1174-01 Product: OpenShift API for Data Protection Advisory URL: https://access.redhat.com/errata/RHSA-2023:1174 Issue date: 2023-03-09 CVE Names: CVE-2021-46848 CVE-2022-1122 CVE-2022-1304 CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 CVE-2022-2879 CVE-2022-2880 CVE-2022-2953 CVE-2022-4415 CVE-2022-4883 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-25308 CVE-2022-25309 CVE-2022-25310 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-30293 CVE-2022-35737 CVE-2022-40303 CVE-2022-40304 CVE-2022-41715 CVE-2022-41717 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 CVE-2022-44617 CVE-2022-46285 CVE-2022-47629 CVE-2022-48303 =====================================================================
- Summary:
OpenShift API for Data Protection (OADP) 1.1.2 is now available.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
OpenShift API for Data Protection (OADP) enables you to back up and restore application resources, persistent volume data, and internal container images to external backup storage. OADP enables both file system-based and snapshot-based backups for persistent volumes.
Security Fix(es) from Bugzilla:
-
golang: archive/tar: unbounded memory consumption when reading headers (CVE-2022-2879)
-
golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters (CVE-2022-2880)
-
golang: regexp/syntax: limit memory used by parsing regexps (CVE-2022-41715)
-
golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)
For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers 2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters 2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps 2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
- JIRA issues fixed (https://issues.jboss.org/):
OADP-1056 - DPA fails validation if multiple BSLs have the same provider OADP-1150 - Handle docker env config changes in the oadp-operator OADP-1217 - update velero + restic to 1.9.5 OADP-1256 - Backup stays in progress status after restic pod is restarted due to OOM killed OADP-1289 - Restore partially fails with error "Secrets \"deployer-token-rrjqx\" not found" OADP-290 - Remove creation/usage of velero-privileged SCC
- References:
https://access.redhat.com/security/cve/CVE-2021-46848 https://access.redhat.com/security/cve/CVE-2022-1122 https://access.redhat.com/security/cve/CVE-2022-1304 https://access.redhat.com/security/cve/CVE-2022-2056 https://access.redhat.com/security/cve/CVE-2022-2057 https://access.redhat.com/security/cve/CVE-2022-2058 https://access.redhat.com/security/cve/CVE-2022-2519 https://access.redhat.com/security/cve/CVE-2022-2520 https://access.redhat.com/security/cve/CVE-2022-2521 https://access.redhat.com/security/cve/CVE-2022-2867 https://access.redhat.com/security/cve/CVE-2022-2868 https://access.redhat.com/security/cve/CVE-2022-2869 https://access.redhat.com/security/cve/CVE-2022-2879 https://access.redhat.com/security/cve/CVE-2022-2880 https://access.redhat.com/security/cve/CVE-2022-2953 https://access.redhat.com/security/cve/CVE-2022-4415 https://access.redhat.com/security/cve/CVE-2022-4883 https://access.redhat.com/security/cve/CVE-2022-22624 https://access.redhat.com/security/cve/CVE-2022-22628 https://access.redhat.com/security/cve/CVE-2022-22629 https://access.redhat.com/security/cve/CVE-2022-22662 https://access.redhat.com/security/cve/CVE-2022-25308 https://access.redhat.com/security/cve/CVE-2022-25309 https://access.redhat.com/security/cve/CVE-2022-25310 https://access.redhat.com/security/cve/CVE-2022-26700 https://access.redhat.com/security/cve/CVE-2022-26709 https://access.redhat.com/security/cve/CVE-2022-26710 https://access.redhat.com/security/cve/CVE-2022-26716 https://access.redhat.com/security/cve/CVE-2022-26717 https://access.redhat.com/security/cve/CVE-2022-26719 https://access.redhat.com/security/cve/CVE-2022-27404 https://access.redhat.com/security/cve/CVE-2022-27405 https://access.redhat.com/security/cve/CVE-2022-27406 https://access.redhat.com/security/cve/CVE-2022-30293 https://access.redhat.com/security/cve/CVE-2022-35737 https://access.redhat.com/security/cve/CVE-2022-40303 https://access.redhat.com/security/cve/CVE-2022-40304 
https://access.redhat.com/security/cve/CVE-2022-41715 https://access.redhat.com/security/cve/CVE-2022-41717 https://access.redhat.com/security/cve/CVE-2022-42010 https://access.redhat.com/security/cve/CVE-2022-42011 https://access.redhat.com/security/cve/CVE-2022-42012 https://access.redhat.com/security/cve/CVE-2022-42898 https://access.redhat.com/security/cve/CVE-2022-43680 https://access.redhat.com/security/cve/CVE-2022-44617 https://access.redhat.com/security/cve/CVE-2022-46285 https://access.redhat.com/security/cve/CVE-2022-47629 https://access.redhat.com/security/cve/CVE-2022-48303 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.7.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/
Security updates:
- CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements
- CVE-2023-22467 luxon: Inefficient regular expression complexity in luxon.js
- CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
- CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
Bug addressed:
-
ACM 2.7 images (BZ# 2116459)
-
Solution:
For Red Hat Advanced Cluster Management for Kubernetes, see the following documentation, which will be updated shortly for this release, for important instructions on installing this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html-single/install/index#installing
- Bugs fixed (https://bugzilla.redhat.com/):
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2116459 - RHACM 2.7.0 images 2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function 2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements 2159959 - CVE-2023-22467 luxon: Inefficient regular expression complexity in luxon.js
- JIRA issues fixed (https://issues.jboss.org/):
MTA-103 - MTA 6.0.1 Installation failed with CrashLoop Error for UI Pod MTA-106 - Implement ability for windup addon image pull policy to be configurable MTA-122 - MTA is upgrading automatically ignoring 'Manual' setting MTA-123 - MTA Becomes unusable when running bulk binary analysis MTA-127 - After upgrading MTA operator from 6.0.0 to 6.0.1 and running analysis , task pods starts failing MTA-131 - Analysis stops working after MTA upgrade from 6.0.0 to 6.0.1 MTA-36 - Can't disable a proxy if it has an invalid configuration MTA-44 - Make RWX volumes optional. MTA-49 - Uploaded a local binary when return back to the page the UI should show green bar and correct % MTA-59 - Getting error 401 if deleting many credentials quickly MTA-65 - Set windup addon image pull policy to be controlled by the global image_pull_policy parameter MTA-72 - CVE-2022-46175 mta-ui-container: json5: Prototype Pollution in JSON5 via Parse Method [mta-6] MTA-73 - CVE-2022-37601 mta-ui-container: loader-utils: prototype pollution in function parseQuery in parseQuery.js [mta-6] MTA-74 - CVE-2020-36567 mta-windup-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6] MTA-76 - CVE-2022-37603 mta-ui-container: loader-utils:Regular expression denial of service [mta-6] MTA-77 - CVE-2020-36567 mta-hub-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6] MTA-80 - CVE-2021-35065 mta-ui-container: glob-parent: Regular Expression Denial of Service [mta-6] MTA-82 - CVE-2022-42920 org.jboss.windup-windup-cli-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0] MTA-85 - CVE-2022-24999 mta-ui-container: express: "qs" prototype poisoning causes the hang of the node process [mta-6] MTA-88 - CVE-2020-36567 mta-admin-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6] MTA-92 - CVE-2022-42920 org.jboss.windup.plugin-windup-maven-plugin-parent: 
Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0] MTA-96 - [UI] Maven -> "Local artifact repository" textbox can be checked and has no tooltip
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2022-12-13-8 watchOS 9.2
watchOS 9.2 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213536.
Accounts
Available for: Apple Watch Series 4 and later
Impact: A user may be able to view sensitive user information
Description: This issue was addressed with improved data protection.
CVE-2022-42843: Mickey Jin (@patch1t)

AppleAVD
Available for: Apple Watch Series 4 and later
Impact: Parsing a maliciously crafted video file may lead to kernel code execution
Description: An out-of-bounds write issue was addressed with improved input validation.
CVE-2022-46694: Andrey Labunets and Nikita Tarakanov

AppleMobileFileIntegrity
Available for: Apple Watch Series 4 and later
Impact: An app may be able to bypass Privacy preferences
Description: This issue was addressed by enabling hardened runtime.
CVE-2022-42865: Wojciech Reguła (@_r3ggi) of SecuRing

CoreServices
Available for: Apple Watch Series 4 and later
Impact: An app may be able to bypass Privacy preferences
Description: Multiple issues were addressed by removing the vulnerable code.
CVE-2022-42859: Mickey Jin (@patch1t), Csaba Fitzl (@theevilbit) of Offensive Security

ImageIO
Available for: Apple Watch Series 4 and later
Impact: Processing a maliciously crafted file may lead to arbitrary code execution
Description: An out-of-bounds write issue was addressed with improved input validation.
CVE-2022-46693: Mickey Jin (@patch1t)

IOHIDFamily
Available for: Apple Watch Series 4 and later
Impact: An app may be able to execute arbitrary code with kernel privileges
Description: A race condition was addressed with improved state handling.
CVE-2022-42864: Tommy Muir (@Muirey03)

IOMobileFrameBuffer
Available for: Apple Watch Series 4 and later
Impact: An app may be able to execute arbitrary code with kernel privileges
Description: An out-of-bounds write issue was addressed with improved input validation.
CVE-2022-46690: John Aakerblom (@jaakerblom)

iTunes Store
Available for: Apple Watch Series 4 and later
Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution
Description: An issue existed in the parsing of URLs. This issue was addressed with improved input validation.
CVE-2022-42837: an anonymous researcher

Kernel
Available for: Apple Watch Series 4 and later
Impact: An app may be able to execute arbitrary code with kernel privileges
Description: A race condition was addressed with additional validation.
CVE-2022-46689: Ian Beer of Google Project Zero

Kernel
Available for: Apple Watch Series 4 and later
Impact: A remote user may be able to cause kernel code execution
Description: The issue was addressed with improved memory handling.
CVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year Lab

Kernel
Available for: Apple Watch Series 4 and later
Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges
Description: The issue was addressed with improved memory handling.
CVE-2022-42845: Adam Doupé of ASU SEFCOM

libxml2
Available for: Apple Watch Series 4 and later
Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution
Description: An integer overflow was addressed through improved input validation.
CVE-2022-40303: Maddie Stone of Google Project Zero

libxml2
Available for: Apple Watch Series 4 and later
Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution
Description: This issue was addressed with improved checks.
CVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project Zero

Safari
Available for: Apple Watch Series 4 and later
Impact: Visiting a website that frames malicious content may lead to UI spoofing
Description: A spoofing issue existed in the handling of URLs. This issue was addressed with improved input validation.
CVE-2022-46695: KirtiKumar Anandrao Ramchandani

Software Update
Available for: Apple Watch Series 4 and later
Impact: A user may be able to elevate privileges
Description: An access issue existed with privileged API calls. This issue was addressed with additional restrictions.
CVE-2022-42849: Mickey Jin (@patch1t)

Weather
Available for: Apple Watch Series 4 and later
Impact: An app may be able to read sensitive location information
Description: The issue was addressed with improved handling of caches.
CVE-2022-42866: an anonymous researcher

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A use after free issue was addressed with improved memory management.
WebKit Bugzilla: 245521
CVE-2022-42867: Maddie Stone of Google Project Zero

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A memory consumption issue was addressed with improved memory handling.
WebKit Bugzilla: 245466
CVE-2022-46691: an anonymous researcher

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may bypass Same Origin Policy
Description: A logic issue was addressed with improved state management.
WebKit Bugzilla: 246783
CVE-2022-46692: KirtiKumar Anandrao Ramchandani

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may result in the disclosure of process memory
Description: The issue was addressed with improved memory handling.
CVE-2022-42852: hazbinhotel working with Trend Micro Zero Day Initiative

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A memory corruption issue was addressed with improved input validation.
WebKit Bugzilla: 246942
CVE-2022-46696: Samuel Groß of Google V8 Security
WebKit Bugzilla: 247562
CVE-2022-46700: Samuel Groß of Google V8 Security

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may disclose sensitive user information
Description: A logic issue was addressed with improved checks.
CVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs & DNSLab, Korea Univ.

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A memory corruption issue was addressed with improved state management.
WebKit Bugzilla: 247420
CVE-2022-46699: Samuel Groß of Google V8 Security
WebKit Bugzilla: 244622
CVE-2022-42863: an anonymous researcher
Additional recognition
Kernel We would like to acknowledge Zweig of Kunlun Lab for their assistance.
Safari Extensions We would like to acknowledge Oliver Dunk and Christian R. of 1Password for their assistance.
WebKit We would like to acknowledge an anonymous researcher and scarlet for their assistance.
Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641 To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About". Alternatively, on your watch, select "My Watch > General > About". All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
Bugs fixed (https://bugzilla.redhat.com/):

2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents
2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets
Show details on source website{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202210-0997",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "manageability sdk",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "libxml2",
"scope": "lt",
"trust": 1.0,
"vendor": "xmlsoft",
"version": "2.10.3"
},
{
"model": "clustered data ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.2"
},
{
"model": "watchos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "9.2"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.2"
},
{
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "tvos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "16.2"
},
{
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ipados",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.2"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.7.2"
},
{
"model": "snapmanager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0"
},
{
"model": "active iq unified manager",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "snapmanager",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "ipados",
"scope": null,
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": null
},
{
"model": "macos",
"scope": null,
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": null
},
{
"model": "ios",
"scope": null,
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": null
},
{
"model": "h410c",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "watchos",
"scope": "eq",
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": "9.2"
},
{
"model": "ontap select deploy administration utility",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "manageability sdk",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "clustered data ontap antivirus connector",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "ontap",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "libxml2",
"scope": null,
"trust": 0.8,
"vendor": "xmlsoft",
"version": null
},
{
"model": "tvos",
"scope": null,
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:xmlsoft:libxml2:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "2.10.3",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vsphere:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:snapmanager:-:*:*:*:*:hyper-v:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:netapp_manageability_sdk:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "11.7.2",
"versionStartIncluding": "11.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:tvos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "16.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.7.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.7.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.6.2",
"versionStartIncluding": "12.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "170956"
},
{
"db": "PACKETSTORM",
"id": "170955"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170899"
},
{
"db": "PACKETSTORM",
"id": "171144"
},
{
"db": "PACKETSTORM",
"id": "171040"
}
],
"trust": 0.6
},
"cve": "CVE-2022-40303",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.5,
"baseSeverity": "High",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2022-40303",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-40303",
"trust": 1.8,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "An issue was discovered in libxml2 before 2.10.3. When parsing a multi-gigabyte XML document with the XML_PARSE_HUGE parser option enabled, several integer counters can overflow. This results in an attempt to access an array at a negative 2GB offset, typically leading to a segmentation fault. xmlsoft.org of libxml2 Products from other vendors contain integer overflow vulnerabilities.Service operation interruption (DoS) It may be in a state. libxml2 is an open source library for parsing XML documents. It is written in C language and can be called by many languages, such as C language, C++, XSH. Currently there is no information about this vulnerability, please keep an eye on CNNVD or vendor announcements. \n\nCVE-2022-40304\n\n Ned Williamson and Nathan Wachholz discovered a vulnerability when\n handling detection of entity reference cycles, which may result in\n corrupted dictionary entries. This flaw may lead to logic errors,\n including memory errors like double free flaws. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.9.10+dfsg-6.7+deb11u3. \n\nWe recommend that you upgrade your libxml2 packages. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-39\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: libxml2: Multiple Vulnerabilities\n Date: October 31, 2022\n Bugs: #877149\n ID: 202210-39\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in libxml2, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n==========\n\nlibxml2 is the XML C parser and toolkit developed for the GNOME project. 
\n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 dev-libs/libxml2 \u003c 2.10.3 \u003e= 2.10.3\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in libxml2. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll libxml2 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=dev-libs/libxml2-2.10.3\"\n\nReferences\n==========\n\n[ 1 ] CVE-2022-40303\n https://nvd.nist.gov/vuln/detail/CVE-2022-40303\n[ 2 ] CVE-2022-40304\n https://nvd.nist.gov/vuln/detail/CVE-2022-40304\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-39\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. Description:\n\nVersion 1.27.0 of the OpenShift Serverless Operator is supported on Red Hat\nOpenShift Container Platform versions 4.8, 4.9, 4.10, 4.11 and 4.12. \n\nThis release includes security and bug fixes, and enhancements. Bugs fixed (https://bugzilla.redhat.com/):\n\n2156263 - CVE-2022-46175 json5: Prototype Pollution in JSON5 via Parse Method\n2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-3397 - [Developer Console] \"parse error\" when testing with normal user\nLOG-3441 - [Administrator Console] Seeing \"parse error\" while using Severity filter for cluster view user\nLOG-3463 - [release-5.6] ElasticsearchError error=\"400 - Rejected by Elasticsearch\" when adding some labels in application namespaces\nLOG-3477 - [Logging 5.6.0]CLF raises \u0027invalid: unrecognized outputs: [default]\u0027 after adding `default` to outputRefs. \nLOG-3494 - [release-5.6] After querying logs in loki, compactor pod raises many TLS handshake error if retention policy is enabled. \nLOG-3496 - [release-5.6] LokiStack status is still \u0027Pending\u0027 when all loki components are running\nLOG-3510 - [release-5.6] TLS errors on Loki controller pod due to bad certificate\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift API for Data Protection (OADP) 1.1.2 security and bug fix update\nAdvisory ID: RHSA-2023:1174-01\nProduct: OpenShift API for Data Protection\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:1174\nIssue date: 2023-03-09\nCVE Names: CVE-2021-46848 CVE-2022-1122 CVE-2022-1304 \n CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 \n CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 \n CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 \n CVE-2022-2879 CVE-2022-2880 CVE-2022-2953 \n CVE-2022-4415 CVE-2022-4883 CVE-2022-22624 \n CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 \n CVE-2022-25308 CVE-2022-25309 CVE-2022-25310 \n CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 \n CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 \n CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 \n CVE-2022-30293 CVE-2022-35737 CVE-2022-40303 \n CVE-2022-40304 CVE-2022-41715 CVE-2022-41717 \n CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 \n CVE-2022-42898 CVE-2022-43680 CVE-2022-44617 \n CVE-2022-46285 CVE-2022-47629 
CVE-2022-48303 \n=====================================================================\n\n1. Summary:\n\nOpenShift API for Data Protection (OADP) 1.1.2 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nOpenShift API for Data Protection (OADP) enables you to back up and restore\napplication resources, persistent volume data, and internal container\nimages to external backup storage. OADP enables both file system-based and\nsnapshot-based backups for persistent volumes. \n\nSecurity Fix(es) from Bugzilla:\n\n* golang: archive/tar: unbounded memory consumption when reading headers\n(CVE-2022-2879)\n\n* golang: net/http/httputil: ReverseProxy should not forward unparseable\nquery parameters (CVE-2022-2880)\n\n* golang: regexp/syntax: limit memory used by parsing regexps\n(CVE-2022-41715)\n\n* golang: net/http: An attacker can cause excessive memory growth in a Go\nserver accepting HTTP/2 requests (CVE-2022-41717)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, and other related information, refer to the CVE page(s) listed in\nthe References section. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOADP-1056 - DPA fails validation if multiple BSLs have the same provider\nOADP-1150 - Handle docker env config changes in the oadp-operator\nOADP-1217 - update velero + restic to 1.9.5\nOADP-1256 - Backup stays in progress status after restic pod is restarted due to OOM killed\nOADP-1289 - Restore partially fails with error \"Secrets \\\"deployer-token-rrjqx\\\" not found\"\nOADP-290 - Remove creation/usage of velero-privileged SCC\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-46848\nhttps://access.redhat.com/security/cve/CVE-2022-1122\nhttps://access.redhat.com/security/cve/CVE-2022-1304\nhttps://access.redhat.com/security/cve/CVE-2022-2056\nhttps://access.redhat.com/security/cve/CVE-2022-2057\nhttps://access.redhat.com/security/cve/CVE-2022-2058\nhttps://access.redhat.com/security/cve/CVE-2022-2519\nhttps://access.redhat.com/security/cve/CVE-2022-2520\nhttps://access.redhat.com/security/cve/CVE-2022-2521\nhttps://access.redhat.com/security/cve/CVE-2022-2867\nhttps://access.redhat.com/security/cve/CVE-2022-2868\nhttps://access.redhat.com/security/cve/CVE-2022-2869\nhttps://access.redhat.com/security/cve/CVE-2022-2879\nhttps://access.redhat.com/security/cve/CVE-2022-2880\nhttps://access.redhat.com/security/cve/CVE-2022-2953\nhttps://access.redhat.com/security/cve/CVE-2022-4415\nhttps://access.redhat.com/security/cve/CVE-2022-4883\nhttps://access.redhat.com/security/cve/CVE-2022-22624\nhttps://access.redhat.com/security/cve/CVE
-2022-22628\nhttps://access.redhat.com/security/cve/CVE-2022-22629\nhttps://access.redhat.com/security/cve/CVE-2022-22662\nhttps://access.redhat.com/security/cve/CVE-2022-25308\nhttps://access.redhat.com/security/cve/CVE-2022-25309\nhttps://access.redhat.com/security/cve/CVE-2022-25310\nhttps://access.redhat.com/security/cve/CVE-2022-26700\nhttps://access.redhat.com/security/cve/CVE-2022-26709\nhttps://access.redhat.com/security/cve/CVE-2022-26710\nhttps://access.redhat.com/security/cve/CVE-2022-26716\nhttps://access.redhat.com/security/cve/CVE-2022-26717\nhttps://access.redhat.com/security/cve/CVE-2022-26719\nhttps://access.redhat.com/security/cve/CVE-2022-27404\nhttps://access.redhat.com/security/cve/CVE-2022-27405\nhttps://access.redhat.com/security/cve/CVE-2022-27406\nhttps://access.redhat.com/security/cve/CVE-2022-30293\nhttps://access.redhat.com/security/cve/CVE-2022-35737\nhttps://access.redhat.com/security/cve/CVE-2022-40303\nhttps://access.redhat.com/security/cve/CVE-2022-40304\nhttps://access.redhat.com/security/cve/CVE-2022-41715\nhttps://access.redhat.com/security/cve/CVE-2022-41717\nhttps://access.redhat.com/security/cve/CVE-2022-42010\nhttps://access.redhat.com/security/cve/CVE-2022-42011\nhttps://access.redhat.com/security/cve/CVE-2022-42012\nhttps://access.redhat.com/security/cve/CVE-2022-42898\nhttps://access.redhat.com/security/cve/CVE-2022-43680\nhttps://access.redhat.com/security/cve/CVE-2022-44617\nhttps://access.redhat.com/security/cve/CVE-2022-46285\nhttps://access.redhat.com/security/cve/CVE-2022-47629\nhttps://access.redhat.com/security/cve/CVE-2022-48303\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.7.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/\n\nSecurity updates:\n\n* CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML\nresponses containing multiple Assertion elements\n* CVE-2023-22467 luxon: Inefficient regular expression complexity in\nluxon.js\n* CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n* CVE-2022-30629 golang: crypto/tls: session tickets lack random\nticket_age_add\n\nBug addressed:\n\n* ACM 2.7 images (BZ# 2116459)\n\n3. Solution:\n\nFor Red Hat Advanced Cluster Management for Kubernetes, see the following\ndocumentation, which will be updated shortly for this release, for\nimportant\ninstructions on installing this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html-single/install/index#installing\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2116459 - RHACM 2.7.0 images\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements\n2159959 - CVE-2023-22467 luxon: Inefficient regular expression complexity in luxon.js\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nMTA-103 - MTA 6.0.1 Installation failed with CrashLoop Error for UI Pod\nMTA-106 - Implement ability for windup addon image pull policy to be configurable\nMTA-122 - MTA is upgrading automatically ignoring \u0027Manual\u0027 setting\nMTA-123 - MTA Becomes unusable when running bulk binary analysis\nMTA-127 - After upgrading MTA operator from 6.0.0 to 6.0.1 and running analysis , task pods starts failing \nMTA-131 - Analysis stops working after MTA upgrade from 6.0.0 to 6.0.1\nMTA-36 - Can\u0027t disable a proxy if it has an invalid configuration\nMTA-44 - Make RWX volumes optional. 
\nMTA-49 - Uploaded a local binary when return back to the page the UI should show green bar and correct %\nMTA-59 - Getting error 401 if deleting many credentials quickly\nMTA-65 - Set windup addon image pull policy to be controlled by the global image_pull_policy parameter\nMTA-72 - CVE-2022-46175 mta-ui-container: json5: Prototype Pollution in JSON5 via Parse Method [mta-6]\nMTA-73 - CVE-2022-37601 mta-ui-container: loader-utils: prototype pollution in function parseQuery in parseQuery.js [mta-6]\nMTA-74 - CVE-2020-36567 mta-windup-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]\nMTA-76 - CVE-2022-37603 mta-ui-container: loader-utils:Regular expression denial of service [mta-6]\nMTA-77 - CVE-2020-36567 mta-hub-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]\nMTA-80 - CVE-2021-35065 mta-ui-container: glob-parent: Regular Expression Denial of Service [mta-6]\nMTA-82 - CVE-2022-42920 org.jboss.windup-windup-cli-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]\nMTA-85 - CVE-2022-24999 mta-ui-container: express: \"qs\" prototype poisoning causes the hang of the node process [mta-6]\nMTA-88 - CVE-2020-36567 mta-admin-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]\nMTA-92 - CVE-2022-42920 org.jboss.windup.plugin-windup-maven-plugin-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]\nMTA-96 - [UI] Maven -\u003e \"Local artifact repository\" textbox can be checked and has no tooltip\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-12-13-8 watchOS 9.2\n\nwatchOS 9.2 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213536. 
\n\nAccounts\nAvailable for: Apple Watch Series 4 and later\nImpact: A user may be able to view sensitive user information\nDescription: This issue was addressed with improved data protection. \nCVE-2022-42843: Mickey Jin (@patch1t)\n\nAppleAVD\nAvailable for: Apple Watch Series 4 and later\nImpact: Parsing a maliciously crafted video file may lead to kernel\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46694: Andrey Labunets and Nikita Tarakanov\n\nAppleMobileFileIntegrity\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2022-42865: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nCoreServices\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: Multiple issues were addressed by removing the\nvulnerable code. \nCVE-2022-42859: Mickey Jin (@patch1t), Csaba Fitzl (@theevilbit) of\nOffensive Security\n\nImageIO\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46693: Mickey Jin (@patch1t)\n\nIOHIDFamily\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2022-42864: Tommy Muir (@Muirey03)\n\nIOMobileFrameBuffer\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. 
\nCVE-2022-46690: John Aakerblom (@jaakerblom)\n\niTunes Store\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An issue existed in the parsing of URLs. This issue was\naddressed with improved input validation. \nCVE-2022-42837: an anonymous researcher\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with additional\nvalidation. \nCVE-2022-46689: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause kernel code execution\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42845: Adam Doup\u00e9 of ASU SEFCOM\n\nlibxml2\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. \nCVE-2022-40303: Maddie Stone of Google Project Zero\n\nlibxml2\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project\nZero\n\nSafari\nAvailable for: Apple Watch Series 4 and later\nImpact: Visiting a website that frames malicious content may lead to\nUI spoofing\nDescription: A spoofing issue existed in the handling of URLs. 
This\nissue was addressed with improved input validation. \nCVE-2022-46695: KirtiKumar Anandrao Ramchandani\n\nSoftware Update\nAvailable for: Apple Watch Series 4 and later\nImpact: A user may be able to elevate privileges\nDescription: An access issue existed with privileged API calls. This\nissue was addressed with additional restrictions. \nCVE-2022-42849: Mickey Jin (@patch1t)\n\nWeather\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to read sensitive location information\nDescription: The issue was addressed with improved handling of\ncaches. \nCVE-2022-42866: an anonymous researcher\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 245521\nCVE-2022-42867: Maddie Stone of Google Project Zero\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 245466\nCVE-2022-46691: an anonymous researcher\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may bypass Same\nOrigin Policy\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 246783\nCVE-2022-46692: KirtiKumar Anandrao Ramchandani\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may result in the\ndisclosure of process memory\nDescription: The issue was addressed with improved memory handling. 
\nCVE-2022-42852: hazbinhotel working with Trend Micro Zero Day\nInitiative\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nWebKit Bugzilla: 246942\nCVE-2022-46696: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 247562\nCVE-2022-46700: Samuel Gro\u00df of Google V8 Security\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs\n\u0026 DNSLab, Korea Univ. \n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 247420\nCVE-2022-46699: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 244622\nCVE-2022-42863: an anonymous researcher\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Zweig of Kunlun Lab for their\nassistance. \n\nSafari Extensions\nWe would like to acknowledge Oliver Dunk and Christian R. of\n1Password for their assistance. \n\nWebKit\nWe would like to acknowledge an anonymous researcher and scarlet for\ntheir assistance. \n\nInstructions on how to update your Apple Watch software are available\nat https://support.apple.com/kb/HT204641 To check the version on\nyour Apple Watch, open the Apple Watch app on your iPhone and select\n\"My Watch \u003e General \u003e About\". Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFX4ACgkQ4RjMIDke\nNxlyKA//eeU/txeqNxHM7JQE6xFrlla1tinQYMjbLhMgzdTbKpPjX8aHVqFfLB/Q\n5nH+NqrGs4HQwNQJ6fSiBIId0th71mgX7W3Noa1apzFh7Okl6IehczkAFB9OH7ve\nvnwiEECGU0hUNmbIi0s9HuuBo6eSNPFsJt0Jqn8ovV+F9bc+ftl/IRv6q2vg3rl3\nDNag62BCmCN4uXmqoJ4CKg7cNbddvma0bDbB1yYujxdmFwm4JGN6aittXE3WtPK2\nGH2/UxdZll8FR7Zegh1ziUcTaLR4dwHlXRFgc6WC8hqx6T8imNh1heAPwzhT+Iag\npiObDoMs7UYFKF/eQ8LUcl4hX8IOdLFO5I+BcvCzOcKqHutPqbE8QRU9yqjcQlsJ\nsOV7GT9W9J+QhibpIJbLVkkQp5djPZ8mLP0OKiRN1quEDWMrquPdM+r9ftJwEIki\nPLL/ur9c7geXCJCLzglMSMkNcoGZk77qzfJuPdoE0lD6zjdvBHalF5j8S0a1+9gi\nex3zU1I+ixqg7CvLNfkSjLcO9KOoPEFHnqEFrrO17QWWyraugrPgV0dMYArGRBpA\nFofYP6bXLv8eSUNuyOoQxF6kS4ChYgLUabl2NYqop9LoRWAtDAclTiabuvDJPfqA\nW09wxdhbpp2saxt8LlQjffzOmHJST6oHhHZiFiFswRM0q0nue6I=\n=DltD\n-----END PGP SIGNATURE-----\n\n\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents\n2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40303"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "PACKETSTORM",
"id": "169732"
},
{
"db": "PACKETSTORM",
"id": "169620"
},
{
"db": "PACKETSTORM",
"id": "170956"
},
{
"db": "PACKETSTORM",
"id": "170955"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170899"
},
{
"db": "PACKETSTORM",
"id": "171144"
},
{
"db": "PACKETSTORM",
"id": "170318"
},
{
"db": "PACKETSTORM",
"id": "171040"
}
],
"trust": 2.52
},
"exploit_availability": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-429429",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
}
]
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-40303",
"trust": 3.6
},
{
"db": "JVN",
"id": "JVNVU93250330",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU99836374",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-24-102-08",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-24-165-04",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-24-165-10",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-24-165-06",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023015",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170318",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "169620",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170899",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170955",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "169732",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171040",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170317",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170316",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170753",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169857",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171016",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169825",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170555",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171173",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171043",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170752",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170096",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170312",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169858",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170097",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171042",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171017",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170754",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170315",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171260",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202210-1031",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-429429",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170956",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171310",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171144",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"db": "PACKETSTORM",
"id": "169732"
},
{
"db": "PACKETSTORM",
"id": "169620"
},
{
"db": "PACKETSTORM",
"id": "170956"
},
{
"db": "PACKETSTORM",
"id": "170955"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170899"
},
{
"db": "PACKETSTORM",
"id": "171144"
},
{
"db": "PACKETSTORM",
"id": "170318"
},
{
"db": "PACKETSTORM",
"id": "171040"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"id": "VAR-202210-0997",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T20:33:29.996000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "HT213535",
"trust": 0.8,
"url": "https://security.netapp.com/advisory/ntap-20221209-0003/"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-190",
"trust": 1.1
},
{
"problemtype": "Integer overflow or wraparound (CWE-190) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.9,
"url": "http://seclists.org/fulldisclosure/2022/dec/21"
},
{
"trust": 1.9,
"url": "http://seclists.org/fulldisclosure/2022/dec/24"
},
{
"trust": 1.9,
"url": "http://seclists.org/fulldisclosure/2022/dec/25"
},
{
"trust": 1.9,
"url": "http://seclists.org/fulldisclosure/2022/dec/26"
},
{
"trust": 1.9,
"url": "http://seclists.org/fulldisclosure/2022/dec/27"
},
{
"trust": 1.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40303"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20221209-0003/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213531"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213533"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213534"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213535"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213536"
},
{
"trust": 1.1,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/commit/c846986356fc149915a74972bf198abc266bc2c0"
},
{
"trust": 1.1,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/tags/v2.10.3"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99836374/index.html"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu93250330/index.html"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-102-08"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-04"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-06"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-10"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40304"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-42011"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-42012"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-46848"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-35737"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-43680"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-42010"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
},
{
"trust": 0.3,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-47629"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-21835"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2879"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-21843"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2880"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-41715"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-4883"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46175"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42010"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-44617"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46285"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-43680"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35737"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42011"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2953"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2869"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2058"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2057"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2058"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-41717"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2521"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2519"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2056"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2056"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2868"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2520"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2867"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2519"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2057"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23521"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-41903"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23521"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/libxml2"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/glsa/202210-39"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27664"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.11/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2509"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-46175"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3821"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-46285"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3821"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0634"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-44617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-48303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4415"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1174"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2521"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2520"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1122"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1122"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-22467"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-41912"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3517"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0630"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-22467"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3517"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41912"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-37603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42920"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36567"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-37601"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3787"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2601"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-21830"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36567"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42867"
},
{
"trust": 0.1,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42849"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42842"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42866"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42845"
},
{
"trust": 0.1,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42865"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42864"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42843"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852"
},
{
"trust": 0.1,
"url": "https://support.apple.com/kb/ht204641"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213536."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42837"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42859"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4238"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3064"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-47629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3064"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-41903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0802"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23947"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"db": "PACKETSTORM",
"id": "169732"
},
{
"db": "PACKETSTORM",
"id": "169620"
},
{
"db": "PACKETSTORM",
"id": "170956"
},
{
"db": "PACKETSTORM",
"id": "170955"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170899"
},
{
"db": "PACKETSTORM",
"id": "171144"
},
{
"db": "PACKETSTORM",
"id": "170318"
},
{
"db": "PACKETSTORM",
"id": "171040"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"db": "PACKETSTORM",
"id": "169732"
},
{
"db": "PACKETSTORM",
"id": "169620"
},
{
"db": "PACKETSTORM",
"id": "170956"
},
{
"db": "PACKETSTORM",
"id": "170955"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170899"
},
{
"db": "PACKETSTORM",
"id": "171144"
},
{
"db": "PACKETSTORM",
"id": "170318"
},
{
"db": "PACKETSTORM",
"id": "171040"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-11-23T00:00:00",
"db": "VULHUB",
"id": "VHN-429429"
},
{
"date": "2023-11-24T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"date": "2022-11-07T15:19:42",
"db": "PACKETSTORM",
"id": "169732"
},
{
"date": "2022-11-01T13:29:06",
"db": "PACKETSTORM",
"id": "169620"
},
{
"date": "2023-02-10T15:49:15",
"db": "PACKETSTORM",
"id": "170956"
},
{
"date": "2023-02-10T15:48:32",
"db": "PACKETSTORM",
"id": "170955"
},
{
"date": "2023-03-09T15:14:10",
"db": "PACKETSTORM",
"id": "171310"
},
{
"date": "2023-02-08T16:02:01",
"db": "PACKETSTORM",
"id": "170899"
},
{
"date": "2023-02-28T16:03:55",
"db": "PACKETSTORM",
"id": "171144"
},
{
"date": "2022-12-22T02:13:22",
"db": "PACKETSTORM",
"id": "170318"
},
{
"date": "2023-02-17T16:01:57",
"db": "PACKETSTORM",
"id": "171040"
},
{
"date": "2022-11-23T00:15:11.007000",
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-01-11T00:00:00",
"db": "VULHUB",
"id": "VHN-429429"
},
{
"date": "2024-06-17T07:14:00",
"db": "JVNDB",
"id": "JVNDB-2022-023015"
},
{
"date": "2023-11-07T03:52:15.280000",
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "xmlsoft.org\u00a0 of \u00a0libxml2\u00a0 Integer overflow vulnerability in products from other vendors",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-023015"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "arbitrary, code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "169620"
}
],
"trust": 0.1
}
}
VAR-202112-2255
Vulnerability from variot - Updated: 2024-07-23 20:30 In the IPv6 implementation in the Linux kernel before 5.13.3, net/ipv6/output_core.c has an information leak because of certain use of a hash table which, although big, doesn't properly consider that IPv6-based attackers can typically choose among many IPv6 source addresses. The Linux Kernel contains a weakness related to the use of a risky cryptographic algorithm; information may be obtained. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
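The weakness described above — a hash table that is "big" yet still collides once an attacker can choose among many IPv6 source addresses — can be sketched in a few lines. This is an illustrative model only: the table size, hash function, and secret below are stand-ins, not the kernel's actual implementation.

```python
# Illustrative sketch of the CVE-2021-45485 class of weakness; the
# constants and hash are stand-ins, not kernel code.  Pre-5.13.3
# kernels derived per-destination IPv6 fragment-ID state from a
# fixed-size hash table keyed by (src, dst, secret).
import hashlib

TABLE_SIZE = 2048                 # assumed size for this sketch
SECRET = b"boot-time-secret"      # stands in for the kernel's hash key

def bucket(src: str, dst: str) -> int:
    """Map an address pair to a counter bucket (conceptual model)."""
    h = hashlib.sha256(SECRET + src.encode() + b"|" + dst.encode()).digest()
    return int.from_bytes(h[:4], "big") % TABLE_SIZE

# An attacker who controls many IPv6 source addresses quickly covers
# most of the table, so collisions with a victim's bucket are easy to
# find; the table being "big" is not a defense.
probes = {bucket(f"2001:db8::{i:x}", "2001:db8::1") for i in range(10_000)}
coverage = len(probes) / TABLE_SIZE
print(f"buckets covered by 10,000 probe addresses: {coverage:.0%}")
```

With 10,000 attacker-chosen source addresses hashed into 2,048 buckets, nearly the whole table is covered, which is why the fix moved to per-(source, destination) state rather than relying on table size.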
===================================================================== Red Hat Security Advisory
Synopsis: Important: kernel security, bug fix, and enhancement update Advisory ID: RHSA-2022:1988-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2022:1988 Issue date: 2022-05-10 CVE Names: CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 CVE-2021-3669 CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 CVE-2021-21781 CVE-2021-26401 CVE-2021-29154 CVE-2021-37159 CVE-2021-41864 CVE-2021-42739 CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 CVE-2022-0322 CVE-2022-1011 =====================================================================
- Summary:
An update for kernel is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat CodeReady Linux Builder (v. 8) - aarch64, ppc64le, x86_64 Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
Security Fix(es):
- kernel: fget: check that the fd still exists after getting a ref to it (CVE-2021-4083)
- kernel: avoid cyclic entity chains due to malformed USB descriptors (CVE-2020-0404)
- kernel: speculation on incompletely validated data on IBM Power9 (CVE-2020-4788)
- kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c (CVE-2020-13974)
- kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free (CVE-2021-0941)
- kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP() (CVE-2021-3612)
- kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts (CVE-2021-3669)
- kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c (CVE-2021-3743)
- kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd() (CVE-2021-3744)
- kernel: possible use-after-free in bluetooth module (CVE-2021-3752)
- kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks (CVE-2021-3759)
- kernel: DoS in ccp_run_aes_gcm_cmd() function (CVE-2021-3764)
- kernel: sctp: Invalid chunks may be used to remotely remove existing associations (CVE-2021-3772)
- kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients (CVE-2021-3773)
- kernel: possible leak or corruption of data residing on hugetlbfs (CVE-2021-4002)
- kernel: security regression for CVE-2018-13405 (CVE-2021-4037)
- kernel: Buffer overwrite in decode_nfs_fh function (CVE-2021-4157)
- kernel: cgroup: Use open-time creds and namespace for migration perm checks (CVE-2021-4197)
- kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses (CVE-2021-4203)
- kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed packets replies (CVE-2021-20322)
- kernel: arm: SIGPAGE information disclosure vulnerability (CVE-2021-21781)
- hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715 (CVE-2021-26401)
- kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation (CVE-2021-29154)
- kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c (CVE-2021-37159)
- kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write (CVE-2021-41864)
- kernel: Heap buffer overflow in firedtv driver (CVE-2021-42739)
- kernel: ppc: kvm: allows a malicious KVM guest to crash the host (CVE-2021-43056)
- kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c (CVE-2021-43389)
- kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device (CVE-2021-43976)
- kernel: use-after-free in the TEE subsystem (CVE-2021-44733)
- kernel: information leak in the IPv6 implementation (CVE-2021-45485)
- kernel: information leak in the IPv4 implementation (CVE-2021-45486)
- hw: cpu: intel: Branch History Injection (BHI) (CVE-2022-0001)
- hw: cpu: intel: Intra-Mode BTI (CVE-2022-0002)
- kernel: Local denial of service in bond_ipsec_add_sa (CVE-2022-0286)
- kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c (CVE-2022-0322)
- kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes (CVE-2022-1011)
- kernel: use-after-free in nouveau kernel module (CVE-2020-27820)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1888433 - CVE-2020-4788 kernel: speculation on incompletely validated data on IBM Power9 1901726 - CVE-2020-27820 kernel: use-after-free in nouveau kernel module 1919791 - CVE-2020-0404 kernel: avoid cyclic entity chains due to malformed USB descriptors 1946684 - CVE-2021-29154 kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation 1951739 - CVE-2021-42739 kernel: Heap buffer overflow in firedtv driver 1957375 - [RFE] x86, tsc: Add kcmdline args for skipping tsc calibration sequences 1974079 - CVE-2021-3612 kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP() 1981950 - CVE-2021-21781 kernel: arm: SIGPAGE information disclosure vulnerability 1983894 - Hostnetwork pod to service backed by hostnetwork on the same node is not working with OVN Kubernetes 1985353 - CVE-2021-37159 kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c 1986473 - CVE-2021-3669 kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts 1994390 - FIPS: deadlock between PID 1 and "modprobe crypto-jitterentropy_rng" at boot, preventing system to boot 1997338 - block: update to upstream v5.14 1997467 - CVE-2021-3764 kernel: DoS in ccp_run_aes_gcm_cmd() function 1997961 - CVE-2021-3743 kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c 1999544 - CVE-2021-3752 kernel: possible use-after-free in bluetooth module 1999675 - CVE-2021-3759 kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks 2000627 - CVE-2021-3744 kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd() 2000694 - CVE-2021-3772 kernel: sctp: Invalid chunks may be used to remotely remove existing associations 2004949 - CVE-2021-3773 kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients 2009312 - Incorrect system time reported by the cpu guest statistics (PPC only). 
2009521 - XFS: sync to upstream v5.11 2010463 - CVE-2021-41864 kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write 2011104 - statfs reports wrong free space for small quotas 2013180 - CVE-2021-43389 kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c 2014230 - CVE-2021-20322 kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed packets replies 2015525 - SCTP peel-off with SELinux and containers in OCP 2015755 - zram: zram leak with warning when running zram02.sh in ltp 2016169 - CVE-2020-13974 kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c 2017073 - CVE-2021-43056 kernel: ppc: kvm: allows a malicious KVM guest to crash the host 2017796 - ceph omnibus backport for RHEL-8.6.0 2018205 - CVE-2021-0941 kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free 2022814 - Rebase the input and HID stack in 8.6 to v5.15 2025003 - CVE-2021-43976 kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device 2025726 - CVE-2021-4002 kernel: possible leak or corruption of data residing on hugetlbfs 2027239 - CVE-2021-4037 kernel: security regression for CVE-2018-13405 2029923 - CVE-2021-4083 kernel: fget: check that the fd still exists after getting a ref to it 2030476 - Kernel 4.18.0-348.2.1 secpath_cache memory leak involving strongswan tunnel 2030747 - CVE-2021-44733 kernel: use-after-free in the TEE subsystem 2031200 - rename(2) fails on subfolder mounts when the share path has a trailing slash 2034342 - CVE-2021-4157 kernel: Buffer overwrite in decode_nfs_fh function 2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks 2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses 2037019 - CVE-2022-0286 kernel: Local denial of service in bond_ipsec_add_sa
2039911 - CVE-2021-45485 kernel: information leak in the IPv6 implementation 2039914 - CVE-2021-45486 kernel: information leak in the IPv4 implementation 2042798 - [RHEL8.6][sfc] General sfc driver update 2042822 - CVE-2022-0322 kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c 2043453 - [RHEL8.6 wireless] stack & drivers general update to v5.16+ 2046021 - kernel 4.18.0-358.el8 async dirops causes write errors with namespace restricted caps 2048251 - Selinux is not allowing SCTP connection setup between inter pod communication in enforcing mode 2061700 - CVE-2021-26401 hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715 2061712 - CVE-2022-0001 hw: cpu: intel: Branch History Injection (BHI) 2061721 - CVE-2022-0002 hw: cpu: intel: Intra-Mode BTI 2064855 - CVE-2022-1011 kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes
- Package List:
Red Hat Enterprise Linux BaseOS (v. 8):
Source: kernel-4.18.0-372.9.1.el8.src.rpm
aarch64: bpftool-4.18.0-372.9.1.el8.aarch64.rpm bpftool-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-4.18.0-372.9.1.el8.aarch64.rpm kernel-core-4.18.0-372.9.1.el8.aarch64.rpm kernel-cross-headers-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-core-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-devel-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-modules-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-modules-extra-4.18.0-372.9.1.el8.aarch64.rpm kernel-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-372.9.1.el8.aarch64.rpm kernel-devel-4.18.0-372.9.1.el8.aarch64.rpm kernel-headers-4.18.0-372.9.1.el8.aarch64.rpm kernel-modules-4.18.0-372.9.1.el8.aarch64.rpm kernel-modules-extra-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-libs-4.18.0-372.9.1.el8.aarch64.rpm perf-4.18.0-372.9.1.el8.aarch64.rpm perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm python3-perf-4.18.0-372.9.1.el8.aarch64.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm
noarch: kernel-abi-stablelists-4.18.0-372.9.1.el8.noarch.rpm kernel-doc-4.18.0-372.9.1.el8.noarch.rpm
ppc64le: bpftool-4.18.0-372.9.1.el8.ppc64le.rpm bpftool-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-4.18.0-372.9.1.el8.ppc64le.rpm kernel-core-4.18.0-372.9.1.el8.ppc64le.rpm kernel-cross-headers-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-core-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-devel-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-modules-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-modules-extra-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-372.9.1.el8.ppc64le.rpm kernel-devel-4.18.0-372.9.1.el8.ppc64le.rpm kernel-headers-4.18.0-372.9.1.el8.ppc64le.rpm kernel-modules-4.18.0-372.9.1.el8.ppc64le.rpm kernel-modules-extra-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-libs-4.18.0-372.9.1.el8.ppc64le.rpm perf-4.18.0-372.9.1.el8.ppc64le.rpm perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm python3-perf-4.18.0-372.9.1.el8.ppc64le.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm
s390x: bpftool-4.18.0-372.9.1.el8.s390x.rpm bpftool-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-4.18.0-372.9.1.el8.s390x.rpm kernel-core-4.18.0-372.9.1.el8.s390x.rpm kernel-cross-headers-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-core-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-devel-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-modules-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-modules-extra-4.18.0-372.9.1.el8.s390x.rpm kernel-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-debuginfo-common-s390x-4.18.0-372.9.1.el8.s390x.rpm kernel-devel-4.18.0-372.9.1.el8.s390x.rpm kernel-headers-4.18.0-372.9.1.el8.s390x.rpm kernel-modules-4.18.0-372.9.1.el8.s390x.rpm kernel-modules-extra-4.18.0-372.9.1.el8.s390x.rpm kernel-tools-4.18.0-372.9.1.el8.s390x.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-core-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-devel-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-modules-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-modules-extra-4.18.0-372.9.1.el8.s390x.rpm perf-4.18.0-372.9.1.el8.s390x.rpm perf-debuginfo-4.18.0-372.9.1.el8.s390x.rpm python3-perf-4.18.0-372.9.1.el8.s390x.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.s390x.rpm
x86_64: bpftool-4.18.0-372.9.1.el8.x86_64.rpm bpftool-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-4.18.0-372.9.1.el8.x86_64.rpm kernel-core-4.18.0-372.9.1.el8.x86_64.rpm kernel-cross-headers-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-core-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-devel-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-modules-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-modules-extra-4.18.0-372.9.1.el8.x86_64.rpm kernel-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-372.9.1.el8.x86_64.rpm kernel-devel-4.18.0-372.9.1.el8.x86_64.rpm kernel-headers-4.18.0-372.9.1.el8.x86_64.rpm kernel-modules-4.18.0-372.9.1.el8.x86_64.rpm kernel-modules-extra-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-libs-4.18.0-372.9.1.el8.x86_64.rpm perf-4.18.0-372.9.1.el8.x86_64.rpm perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm python3-perf-4.18.0-372.9.1.el8.x86_64.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm
Red Hat CodeReady Linux Builder (v. 8):
aarch64: bpftool-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-libs-devel-4.18.0-372.9.1.el8.aarch64.rpm perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm
ppc64le: bpftool-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-libs-devel-4.18.0-372.9.1.el8.ppc64le.rpm perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm
x86_64: bpftool-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-libs-devel-4.18.0-372.9.1.el8.x86_64.rpm perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2020-0404 https://access.redhat.com/security/cve/CVE-2020-4788 https://access.redhat.com/security/cve/CVE-2020-13974 https://access.redhat.com/security/cve/CVE-2020-27820 https://access.redhat.com/security/cve/CVE-2021-0941 https://access.redhat.com/security/cve/CVE-2021-3612 https://access.redhat.com/security/cve/CVE-2021-3669 https://access.redhat.com/security/cve/CVE-2021-3743 https://access.redhat.com/security/cve/CVE-2021-3744 https://access.redhat.com/security/cve/CVE-2021-3752 https://access.redhat.com/security/cve/CVE-2021-3759 https://access.redhat.com/security/cve/CVE-2021-3764 https://access.redhat.com/security/cve/CVE-2021-3772 https://access.redhat.com/security/cve/CVE-2021-3773 https://access.redhat.com/security/cve/CVE-2021-4002 https://access.redhat.com/security/cve/CVE-2021-4037 https://access.redhat.com/security/cve/CVE-2021-4083 https://access.redhat.com/security/cve/CVE-2021-4157 https://access.redhat.com/security/cve/CVE-2021-4197 https://access.redhat.com/security/cve/CVE-2021-4203 https://access.redhat.com/security/cve/CVE-2021-20322 https://access.redhat.com/security/cve/CVE-2021-21781 https://access.redhat.com/security/cve/CVE-2021-26401 https://access.redhat.com/security/cve/CVE-2021-29154 https://access.redhat.com/security/cve/CVE-2021-37159 https://access.redhat.com/security/cve/CVE-2021-41864 https://access.redhat.com/security/cve/CVE-2021-42739 https://access.redhat.com/security/cve/CVE-2021-43056 https://access.redhat.com/security/cve/CVE-2021-43389 https://access.redhat.com/security/cve/CVE-2021-43976 https://access.redhat.com/security/cve/CVE-2021-44733 https://access.redhat.com/security/cve/CVE-2021-45485 https://access.redhat.com/security/cve/CVE-2021-45486 https://access.redhat.com/security/cve/CVE-2022-0001 https://access.redhat.com/security/cve/CVE-2022-0002 https://access.redhat.com/security/cve/CVE-2022-0286 https://access.redhat.com/security/cve/CVE-2022-0322 
https://access.redhat.com/security/cve/CVE-2022-1011 https://access.redhat.com/security/updates/classification/#important https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYnqSF9zjgjWX9erEAQjBXQ/8DSpFUMNN6ZVFtli2KuVowVLS+14J0jtj 0zxpr0skJT8vVulU3VTeURBMdg9NAo9bj3R5KTk2+dC+AMuHET5aoVvaYmimBGKL 5qzpu7q9Z0aaD2I288suHCnYuRJnt+qKZtNa4hlcY92bN0tcYBonxsdIS2xM6xIu GHNS8HNVUNz4PuCBfmbITvgX9Qx+iZQVlVccDBG5LDpVwgOtnrxHKbe5E499v/9M oVoN+eV9ulHAZdCHWlUAahbsvEqDraCKNT0nHq/xO5dprPjAcjeKYMeaICtblRr8 k+IouGywaN+mW4sBjnaaiuw2eAtoXq/wHisX1iUdNkroqcx9NBshWMDBJnE4sxQJ ZOSc8B6yjJItPvUI7eD3BDgoka/mdoyXTrg+9VRrir6vfDHPrFySLDrO1O5HM5fO 3sExCVO2VM7QMCGHJ1zXXX4szk4SV/PRsjEesvHOyR2xTKZZWMsXe1h9gYslbADd tW0yco/G23xjxqOtMKuM/nShBChflMy9apssldiOfdqODJMv5d4rRpt0xgmtSOM6 qReveuQCasmNrGlAHgDwbtWz01fmSuk9eYDhZNmHA3gxhoHIV/y+wr0CLbOQtDxT p79nhiqwUo5VMj/X30Lu0Wl3ptLuhRWamzTCkEEzdubr8aVsT4RRNQU3KfVFfpT1 MWp/2ui3i80= =Fdgy -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements. Summary:
The Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2038898 - [UI] 'Update Repository' option not getting disabled after adding the Replication Repository details to the MTC web console 2040693 - 'Replication repository' wizard has no validation for name length 2040695 - [MTC UI] 'Add Cluster' wizard gets stuck when the cluster name length is more than 63 characters 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2048537 - Exposed route host to image registry - connecting successfully to invalid registry 'xyz.com' 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2055658 - [MTC UI] Cancel button on 'Migrations' page does not disappear when migration gets Failed/Succeeded with warnings 2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace 2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the 'Last State' field. 2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade 2061335 - [MTC UI] 'Update cluster'
button is not getting disabled 2062266 - MTC UI does not display logs properly [OADP-BL] 2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend 2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x 2076593 - Velero pod log missing from UI drop down 2076599 - Velero pod log missing from downloaded logs folder [OADP-BL] 2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan 2079252 - [MTC] Rsync options logs not visible in log-reader pod 2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI] 2082225 - non-numeric user when launching stage pods [OADP-BL] 2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments 2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods 2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels 2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL] 2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts 2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL] 2096939 - Fix legacy operator.yml inconsistencies and errors 2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.5.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
- nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)
- containerd: Unprivileged pod may bind mount any privileged regular file on disk (CVE-2021-43816)
- minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)
- openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates (CVE-2022-0778)
- imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path (CVE-2022-24778)
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- nconf: Prototype pollution in memory store (CVE-2022-21803)
- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
Bug fixes:
- RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target (BZ# 2014557)
- RHACM 2.5.0 images (BZ# 2024938)
- [UI] When you delete host agent from infraenv no confirmation message appears (Are you sure you want to delete x?) (BZ#2028348)
- Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly (BZ# 2028647)
- create cluster pool -> choose infra type; as a result, infra providers disappear from UI (BZ# 2033339)
- Restore/backup shows up as Validation failed but the restore backup status in ACM shows success (BZ# 2034279)
- Observability - OCP 311 node roles are not displayed completely (BZ# 2038650)
- Documented uninstall procedure leaves many leftovers (BZ# 2041921)
- infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5 (BZ# 2046554)
- ACM failed to install due to some missing CRDs in operator (BZ# 2047463)
- Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)
- ACM home page now includes /home/ in url (BZ# 2051299)
- proxy heading in Add Credential should be capitalized (BZ# 2051349)
- ACM 2.5 tries to create new MCE instance when installed on top of existing MCE 2.0 (BZ# 2051983)
- Create Policy button does not work and user cannot use console to create policy (BZ# 2053264)
- No cluster information was displayed after a policyset was created (BZ# 2053366)
- Dynamic plugin update does not take effect in Firefox (BZ# 2053516)
- Replicated policy should not be available when creating a Policy Set (BZ# 2054431)
- Placement section in Policy Set wizard does not reset when users click "Back" to re-configure placement (BZ# 2054433)
Bugs fixed (https://bugzilla.redhat.com/):
2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target
2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2028224 - RHACM 2.5.0 images
2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)
2028647 - Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2033339 - create cluster pool -> choose infra type, As a result infra providers disappear from UI.
2034279 - Restore/backup shows up as Validation failed but the restore backup status in ACM shows success
2036252 - CVE-2021-43858 minio: user privilege escalation in AddUser() admin API
2038650 - Observability - OCP 311 node role are not displayed completely
2041921 - Documented uninstall procedure leaves many leftovers
2044434 - CVE-2021-43816 containerd: Unprivileged pod may bind mount any privileged regular file on disk
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2046554 - infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5
2047463 - Acm failed to install due to some missing CRDs in operator
2051298 - Navigation icons no longer showing in ACM 2.5
2051299 - ACM home page now includes /home/ in url
2051349 - proxy heading in Add Credential should be capitalized
2051983 - ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0
2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account
2053264 - Create Policy button does not work and user cannot use console to create policy
2053366 - No cluster information was displayed after a policyset was created
2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements
2053516 - Dynamic plugin update does not take effect in Firefox
2054431 - Replicated policy should not be available when creating a Policy Set
2054433 - Placement section in Policy Set wizard does not reset when users click "Back" to re-configured placement
2054772 - credentialName is not parsed correctly in UI notifications/alerts when creating/updating a discovery config
2054860 - Cluster overview page crashes for on-prem cluster
2055333 - Unable to delete assisted-service operator
2055900 - If MCH is installed on existing MCE and both are in multicluster-engine namespace , uninstalling MCH terminates multicluster-engine namespace
2056485 - [UI] In infraenv detail the host list don't have pagination
2056701 - Non platform install fails agentclusterinstall CRD is outdated in rhacm2.5
2057060 - [CAPI] Unable to create ClusterDeployment due to service account restrictions (ACM + Bundled Assisted)
2058435 - Label cluster.open-cluster-management.io/backup-cluster stamped 'unknown' for velero backups
2059779 - spec.nodeSelector is missing in MCE instance created by MCH upon installing ACM on infra nodes
2059781 - Policy UI crashes when viewing details of configuration policies for backupschedule that does not exist
2060135 - [assisted-install] agentServiceConfig left orphaned after uninstalling ACM
2060151 - Policy set of the same name cannot be re-created after the previous one has been deleted
2060230 - [UI] Delete host modal has incorrect host's name populated
2060309 - multiclusterhub stuck in installing on "ManagedClusterConditionAvailable" [intermittent]
2060469 - The development branch of the Submariner addon deploys 0.11.0, not 0.12.0
2060550 - MCE installation hang due to no console-mce-console deployment available
2060603 - prometheus doesn't display managed clusters
2060831 - Observability - prometheus-operator failed to start on KS
2060934 - Cannot provision AWS OCP 4.9 cluster from Power Hub
2061260 - The value of the policyset placement should be filtered space when input cluster label expression
2061311 - Cleanup of installed spoke clusters hang on deletion of spoke namespace
2061659 - the network section in create cluster -> Networking include the brace in the network title
2061798 - [ACM 2.5] The service of Cluster Proxy addon was missing
2061838 - ACM component subscriptions are removed when enabling spec.disableHubSelfManagement in MCH
2062009 - No name validation is performed on Policy and Policy Set Wizards
2062022 - cluster.open-cluster-management.io/backup-cluster of velero schedules should populate the corresponding hub clusterID
2062025 - No validation is done on yaml's format or content in Policy and Policy Set wizards
2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates
2062337 - velero schedules get re-created after the backupschedule is in 'BackupCollision' phase
2062462 - Upgrade to 2.5 hang due to irreconcilable errors of grc-sub and search-prod-sub in MCH
2062556 - Always return the policyset page after created the policy from UI
2062787 - Submariner Add-on UI does not indicate on Broker error
2063055 - User with cluserrolebinding of open-cluster-management:cluster-manager-admin role can't see policies and clusters page
2063341 - Release imagesets are missing in the console for ocp 4.10
2063345 - Application Lifecycle- UI shows white blank page when the page is Refreshed
2063596 - claim clusters from clusterpool throws errors
2063599 - Update the message in clusterset -> clusterpool page since we did not allow to add clusterpool to clusterset by resourceassignment
2063697 - Observability - MCOCR reports object-storage secret without AWS access_key in STS enabled env
2064231 - Can not clean the instance type for worker pool when create the clusters
2064247 - prefer UI can add the architecture type when create the cluster
2064392 - multicloud oauth-proxy failed to log users in on web
2064477 - Click at "Edit Policy" for each policy leads to a blank page
2064509 - No option to view the ansible job details and its history in the Automation wizard after creation of the automation job
2064516 - Unable to delete an automation job of a policy
2064528 - Columns of Policy Set, Status and Source on Policy page are not sortable
2064535 - Different messages on the empty pages of Overview and Clusters when policy is disabled
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064722 - [Tracker] [DR][ACM 2.5] Applications are not getting deployed on managed cluster
2064899 - Failed to provision openshift 4.10 on bare metal
2065436 - "Filter" drop-down list does not show entries of the policies that have no top-level remediation specified
2066198 - Issues about disabled policy from UI
2066207 - The new created policy should be always shown up on the first line
2066333 - The message was confuse when the cluster status is Running
2066383 - MCE install failing on proxy disconnected environment
2066433 - Logout not working for ACM 2.5
2066464 - console-mce-console pods throw ImagePullError after upgrading to ocp 4.10
2066475 - User with view-only rolebinding should not be allowed to create policy, policy set and automation job
2066544 - The search box can't work properly in Policies page
2066594 - RFE: Can't open the helm source link of the backup-restore-enabled policy from UI
2066650 - minor issues in cluster curator due to the startup throws errors
2066751 - the image repo of application-manager did not updated to use the image repo in MCE/MCH configuration
2066834 - Hibernating cluster(s) in cluster pool stuck in 'Stopping' status after restore activation
2066842 - cluster pool credentials are not backed up
2066914 - Unable to remove cluster value during configuration of the label expressions for policy and policy set
2066940 - Validation fired out for https proxy when the link provided not starting with https
2066965 - No message is displayed in Policy Wizard to indicate a policy externally managed
2066979 - MIssing groups in policy filter options comparing to previous RHACM version
2067053 - I was not able to remove the image mirror content when create the cluster
2067067 - Can't filter the cluster info when clicked the cluster in the Placement section
2067207 - Bare metal asset secrets are not backed up
2067465 - Categories,Standards, and Controls annotations are not updated after user has deleted a selected template
2067713 - Columns on policy's "Results" are not sort-able as in previous release
2067728 - Can't search in the policy creation or policyset creation Yaml editor
2068304 - Application Lifecycle- Replicasets arent showing the logs console in Topology
2068309 - For policy wizard in dynamics plugin environment, buttons at the bottom should be sticky and the contents of the Policy should scroll
2068312 - Application Lifecycle - Argo Apps are not showing overview details and topology after upgrading from 2.4
2068313 - Application Lifecycle - Refreshing overview page leads to a blank page
2068328 - A cluster's "View history" page should not contain all clusters' violations history
2068387 - Observability - observability operator always CrashLoopBackOff in FIPS upgrading hub
2068993 - Observability - Node list is not filtered according to nodeType on OCP 311 dashboard
2069329 - config-policy-controller addon with "Unknown" status in OCP 3.11 managed cluster after upgrade hub to 2.5
2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encryted container image on a shared system due to missing check in CheckAuthorization() code path
2069469 - Status of unreachable clusters is not reported in several places on GRC panels
2069615 - The YAML editor can't work well when login UI using dynamic console plugin
2069622 - No validation for policy template's name
2069698 - After claim a cluster from clusterpool, the cluster pages become very very slow
2069867 - Error occurs when trying to edit an application set/subscription
2069870 - ACM/MCE Dynamic Plugins - 404: Page Not Found Error Occurs - intermittent crashing
2069875 - Cluster secrets are not being created in the managed cluster's namespace
2069895 - Application Lifecycle - Replicaset and Pods gives error messages when Yaml is selected on sidebar
2070203 - Blank Application is shown when editing an Application with AnsibleJobs
2070782 - Failed Secret Propagation to the Same Namespace as the AnsibleJob CR
2070846 - [ACM 2.5] Can't re-add the default clusterset label after removing it from a managedcluster on BM SNO hub
2071066 - Policy set details panel does not work when deployed into namespace different than "default"
2071173 - Configured RunOnce automation job is not displayed although the policy has no violation
2071191 - MIssing title on details panel after clicking "view details" of a policy set card
2071769 - Placement must be always configured or error is reported when creating a policy
2071818 - ACM logo not displayed in About info modal
2071869 - Topology includes the status of local cluster resources when Application is only deployed to managed cluster
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2072097 - Local Cluster is shown as Remote on the Application Overview Page and Single App Overview Page
2072104 - Inconsistent "Not Deployed" Icon Used Between 2.4 and 2.5 as well as the Overview and Topology
2072177 - Cluster Resource Status is showing App Definition Statuses as well
2072227 - Sidebar Statuses Need to Be Updated to Reflect Cluster List and Cluster Resource Statuses
2072231 - Local Cluster not included in the appsubreport for Helm Applications Deployed on All Clusters
2072334 - Redirect URL is now to the details page after created a policy
2072342 - Shows "NaN%" in the ring chart when add the disabled policy into policyset and view its details
2072350 - CRD Deployed via Application Console does not have correct deployment status and spelling
2072359 - Report the error when editing compliance type in the YAML editor and then submit the changes
2072504 - The policy has violations on the failed managed cluster
2072551 - URL dropdown is not being rendered with an Argo App with a new URL
2072773 - When a channel is deleted and recreated through the App Wizard, application creation stalls and warning pops up
2072824 - The edit/delete policyset button should be greyed when using viewer check
2072829 - When Argo App with jsonnet object is deployed, topology and cluster status would fail to display the correct statuses.
2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub
2073330 - Observabilityy - memory usage data are not collected even collect rule is fired on SNO
2073355 - Get blank page when click policy with unknown status in Governance -> Overview page
2073508 - Thread responsible to get insights data from ks clusters is broken
2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters
2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology
2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin
2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of "resource conflict" error
2074178 - Editing Helm Argo Applications does not Prune Old Resources
2074626 - Policy placement failure during ZTP SNO scale test
2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store
2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal
2074937 - UI allows creating cluster even when there are no ClusterImageSets
2075416 - infraEnv failed to create image after restore
2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod
2075739 - The lookup function won't check the referred resource whether exist when using template policies
2076421 - Can't select existing placement for policy or policyset when editing policy or policyset
2076494 - No policyreport CR for spoke clusters generated in the disconnected env
2076502 - The policyset card doesn't show the cluster status(violation/without violation) again after deleted one policy
2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator
2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster
2077291 - Prometheus doesn't display acm_managed_cluster_info after upgrade from 2.4 to 2.5
2077304 - Create Cluster button is disabled only if other clusters exist
2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5
2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI
2077751 - Can't create a template policy from UI when the object's name is referring Golang text template syntax in this policy
2077783 - Still show violation for clusterserviceversions after enforced "Detect Image vulnerabilities " policy template and the operator is installed
2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set
2078164 - Failed to edit a policy without placement
2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement
2078373 - Disable the hyperlink of *ks node in standalone MCE environment since the search component was not exists
2078617 - Azure public credential details get pre-populated with base domain name in UI
2078952 - View pod logs in search details returns error
2078973 - Crashed pod is marked with success in Topology
2079013 - Changing existing placement rules does not change YAML file
2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM
2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on
2079494 - Hitting Enter in yaml editor caused unexpected keys "key00x:" to be created
2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5
2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page
2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted
2079615 - Edit appset placement in UI with a new placement throws error upon submitting
2079658 - Cluster Count is Incorrect in Application UI
2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower
2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters
2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080503 - vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes
2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page
2080712 - Select an existing placement configuration does not work
2080776 - Unrecognized characters are displayed on policy and policy set yaml editors
2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster
2081810 - Type '-' character in Name field caused previously typed character backspaced in in the name field of policy wizard
2081829 - Application deployed on local cluster's topology is crashing after upgrade
2081938 - The deleted policy still be shown on the policyset review page when edit this policy set
2082226 - Object Storage Topology includes residue of resources after Upgrade
2082409 - Policy set details panel remains even after the policy set has been deleted
2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets
2083038 - Warning still refers to the klusterlet-addon-appmgr pod rather than the application-manager pod
2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated
2083434 - The provider-credential-controller did not support the RHV credentials type
2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace
2083870 - When editing an existing application and refreshing the Select an existing placement configuration, multiple occurrences of the placementrule gets displayed
2084034 - The status message looks messy in the policy set card, suggest one kind status one a row
2084158 - Support provisioning bm cluster where no provisioning network provided
2084622 - Local Helm application shows cluster resources as Not Deployed in Topology [Upgrade]
2085083 - Policies fail to copy to cluster namespace after ACM upgrade
2085237 - Resources referenced by a channel are not annotated with backup label
2085273 - Error querying for ansible job in app topology
2085281 - Template name error is reported but the template name was found in a different replicated policy
2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page
2087515 - Validation thrown out in configuration for disconnect install while creating bm credential
2088158 - Object Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]
2088511 - Some cluster resources are not showing labels that are defined in the YAML
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.53. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2022:7873
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
Security Fix(es):
- go-getter: command injection vulnerability (CVE-2022-26945)
- go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
- go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
- go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
You may download the oc tool and use it to inspect release image metadata for x86_64, s390x, and ppc64le architectures. The image digests may be found at https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags
The sha values for the release are:
(For x86_64 architecture) The image digest is sha256:ac2bbfa7036c64bbdb44f9a74df3dbafcff1b851d812bf2a48c4fabcac3c7a53
(For s390x architecture) The image digest is sha256:ac2c74a664257cea299126d4f789cdf9a5a4efc4a4e8c2361b943374d4eb21e4
(For ppc64le architecture) The image digest is sha256:53adc42ed30ad39d7117837dbf5a6db6943a8f0b3b61bc0d046b83394f5c28b2
All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html
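The digest strings above combine with the release repository to form the pull spec that `oc adm release info` accepts. A minimal sketch, assuming the x86_64 digest quoted above; the `oc` invocation itself is shown as a comment because it requires the OpenShift client to be installed:

```shell
# Build the pull spec for the x86_64 release image from its published digest.
REPO="quay.io/openshift-release-dev/ocp-release"
DIGEST="sha256:ac2bbfa7036c64bbdb44f9a74df3dbafcff1b851d812bf2a48c4fabcac3c7a53"
PULLSPEC="${REPO}@${DIGEST}"
echo "$PULLSPEC"
# Inspect the release image metadata (requires the oc client):
#   oc adm release info "$PULLSPEC"
```

Substituting the s390x or ppc64le digest into `DIGEST` yields the pull spec for those architectures in the same way.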
- Bugs fixed (https://bugzilla.redhat.com/):
2077100 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
- JIRA issues fixed (https://issues.jboss.org/):
OCPBUGS-2205 - Prefer local dns does not work expectedly on OCPv4.8
OCPBUGS-2347 - [cluster-api-provider-baremetal] fix 4.8 build
OCPBUGS-2577 - [4.8] ETCD Operator goes degraded when a second internal node ip is added
OCPBUGS-2773 - e2e tests: Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name
OCPBUGS-2989 - [4.8] cri-o should report the stage of container and pod creation it's stuck at
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202112-2255",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "all flash fabric-attached storage 8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "e-series santricity os controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hci compute node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.13.3"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h615c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h610c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core policy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fabric-attached storage 8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core network exposure function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"model": "fabric-attached storage a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fabric-attached storage 8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "all flash fabric-attached storage 8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire \\\u0026 hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire\\, enterprise sds \\\u0026 hci storage node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.3"
},
{
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "aff a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "e-series santricity os controller software",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "hci baseboard management controller h300e",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"model": "fas/aff baseboard management controller a400",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "fas/aff baseboard management controller 8700",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "hci baseboard management controller h410c",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire enterprise sds \u0026 hci storage node",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire \u0026 hci management node",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "hci baseboard management controller h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "fas/aff baseboard management controller 8300",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.13.3",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:e-series_santricity_os_controller:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:netapp:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.1.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_exposure_function:22.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:all_flash_fabric-attached_storage_8300_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:all_flash_fabric-attached_storage_8300:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fabric-attached_storage_8300_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fabric-attached_storage_8300:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:all_flash_fabric-attached_storage_8700_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:all_flash_fabric-attached_storage_8700:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fabric-attached_storage_8700_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fabric-attached_storage_8700:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:aff_a400_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:aff_a400:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fabric-attached_storage_a400_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fabric-attached_storage_a400:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:hci_compute_node_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h610c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h610c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h610s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h610s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h615c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h615c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "167097"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "167459"
},
{
"db": "PACKETSTORM",
"id": "169941"
}
],
"trust": 0.7
},
"cve": "CVE-2021-45485",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Network",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 5.0,
"confidentialityImpact": "Partial",
"exploitabilityScore": null,
"id": "CVE-2021-45485",
"impactScore": null,
"integrityImpact": "None",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "Medium",
"trust": 0.9,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "VHN-409116",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 7.5,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2021-45485",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-45485",
"trust": 1.8,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202112-2265",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-409116",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2021-45485",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "In the IPv6 implementation in the Linux kernel before 5.13.3, net/ipv6/output_core.c has an information leak because of certain use of a hash table which, although big, doesn\u0027t properly consider that IPv6-based attackers can typically choose among many IPv6 source addresses. Linux Kernel Exists in the use of cryptographic algorithms.Information may be obtained. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel security, bug fix, and enhancement update\nAdvisory ID: RHSA-2022:1988-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:1988\nIssue date: 2022-05-10\nCVE Names: CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 \n CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 \n CVE-2021-3669 CVE-2021-3743 CVE-2021-3744 \n CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 \n CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 \n CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 \n CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 \n CVE-2021-21781 CVE-2021-26401 CVE-2021-29154 \n CVE-2021-37159 CVE-2021-41864 CVE-2021-42739 \n CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 \n CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 \n CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 \n CVE-2022-0322 CVE-2022-1011 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder (v. 8) - aarch64, ppc64le, x86_64\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. 
\n\nSecurity Fix(es):\n\n* kernel: fget: check that the fd still exists after getting a ref to it\n(CVE-2021-4083)\n\n* kernel: avoid cyclic entity chains due to malformed USB descriptors\n(CVE-2020-0404)\n\n* kernel: speculation on incompletely validated data on IBM Power9\n(CVE-2020-4788)\n\n* kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c\n(CVE-2020-13974)\n\n* kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a\nuse-after-free (CVE-2021-0941)\n\n* kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP()\n(CVE-2021-3612)\n\n* kernel: reading /proc/sysvipc/shm does not scale with large shared memory\nsegment counts (CVE-2021-3669)\n\n* kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c\n(CVE-2021-3743)\n\n* kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()\n(CVE-2021-3744)\n\n* kernel: possible use-after-free in bluetooth module (CVE-2021-3752)\n\n* kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg\nlimits and DoS attacks (CVE-2021-3759)\n\n* kernel: DoS in ccp_run_aes_gcm_cmd() function (CVE-2021-3764)\n\n* kernel: sctp: Invalid chunks may be used to remotely remove existing\nassociations (CVE-2021-3772)\n\n* kernel: lack of port sanity checking in natd and netfilter leads to\nexploit of OpenVPN clients (CVE-2021-3773)\n\n* kernel: possible leak or coruption of data residing on hugetlbfs\n(CVE-2021-4002)\n\n* kernel: security regression for CVE-2018-13405 (CVE-2021-4037)\n\n* kernel: Buffer overwrite in decode_nfs_fh function (CVE-2021-4157)\n\n* kernel: cgroup: Use open-time creds and namespace for migration perm\nchecks (CVE-2021-4197)\n\n* kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n(CVE-2021-4203)\n\n* kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed\npackets replies (CVE-2021-20322)\n\n* kernel: arm: SIGPAGE information disclosure vulnerability\n(CVE-2021-21781)\n\n* hw: cpu: LFENCE/JMP Mitigation Update for 
CVE-2017-5715 (CVE-2021-26401)\n\n* kernel: Local privilege escalation due to incorrect BPF JIT branch\ndisplacement computation (CVE-2021-29154)\n\n* kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c\n(CVE-2021-37159)\n\n* kernel: eBPF multiplication integer overflow in\nprealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to\nout-of-bounds write (CVE-2021-41864)\n\n* kernel: Heap buffer overflow in firedtv driver (CVE-2021-42739)\n\n* kernel: ppc: kvm: allows a malicious KVM guest to crash the host\n(CVE-2021-43056)\n\n* kernel: an array-index-out-bounds in detach_capi_ctr in\ndrivers/isdn/capi/kcapi.c (CVE-2021-43389)\n\n* kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c\nallows an attacker to cause DoS via crafted USB device (CVE-2021-43976)\n\n* kernel: use-after-free in the TEE subsystem (CVE-2021-44733)\n\n* kernel: information leak in the IPv6 implementation (CVE-2021-45485)\n\n* kernel: information leak in the IPv4 implementation (CVE-2021-45486)\n\n* hw: cpu: intel: Branch History Injection (BHI) (CVE-2022-0001)\n\n* hw: cpu: intel: Intra-Mode BTI (CVE-2022-0002)\n\n* kernel: Local denial of service in bond_ipsec_add_sa (CVE-2022-0286)\n\n* kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c\n(CVE-2022-0322)\n\n* kernel: FUSE allows UAF reads of write() buffers, allowing theft of\n(partial) /etc/shadow hashes (CVE-2022-1011)\n\n* kernel: use-after-free in nouveau kernel module (CVE-2020-27820)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1888433 - CVE-2020-4788 kernel: speculation on incompletely validated data on IBM Power9\n1901726 - CVE-2020-27820 kernel: use-after-free in nouveau kernel module\n1919791 - CVE-2020-0404 kernel: avoid cyclic entity chains due to malformed USB descriptors\n1946684 - CVE-2021-29154 kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation\n1951739 - CVE-2021-42739 kernel: Heap buffer overflow in firedtv driver\n1957375 - [RFE] x86, tsc: Add kcmdline args for skipping tsc calibration sequences\n1974079 - CVE-2021-3612 kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP()\n1981950 - CVE-2021-21781 kernel: arm: SIGPAGE information disclosure vulnerability\n1983894 - Hostnetwork pod to service backed by hostnetwork on the same node is not working with OVN Kubernetes\n1985353 - CVE-2021-37159 kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c\n1986473 - CVE-2021-3669 kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts\n1994390 - FIPS: deadlock between PID 1 and \"modprobe crypto-jitterentropy_rng\" at boot, preventing system to boot\n1997338 - block: update to upstream v5.14\n1997467 - CVE-2021-3764 kernel: DoS in ccp_run_aes_gcm_cmd() function\n1997961 - CVE-2021-3743 kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c\n1999544 - CVE-2021-3752 kernel: possible use-after-free in bluetooth module\n1999675 - CVE-2021-3759 kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks\n2000627 - CVE-2021-3744 kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()\n2000694 - CVE-2021-3772 kernel: sctp: Invalid chunks may be used to remotely 
remove existing associations\n2004949 - CVE-2021-3773 kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients\n2009312 - Incorrect system time reported by the cpu guest statistics (PPC only). \n2009521 - XFS: sync to upstream v5.11\n2010463 - CVE-2021-41864 kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write\n2011104 - statfs reports wrong free space for small quotas\n2013180 - CVE-2021-43389 kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c\n2014230 - CVE-2021-20322 kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed packets replies\n2015525 - SCTP peel-off with SELinux and containers in OCP\n2015755 - zram: zram leak with warning when running zram02.sh in ltp\n2016169 - CVE-2020-13974 kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c\n2017073 - CVE-2021-43056 kernel: ppc: kvm: allows a malicious KVM guest to crash the host\n2017796 - ceph omnibus backport for RHEL-8.6.0\n2018205 - CVE-2021-0941 kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free\n2022814 - Rebase the input and HID stack in 8.6 to v5.15\n2025003 - CVE-2021-43976 kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device\n2025726 - CVE-2021-4002 kernel: possible leak or coruption of data residing on hugetlbfs\n2027239 - CVE-2021-4037 kernel: security regression for CVE-2018-13405\n2029923 - CVE-2021-4083 kernel: fget: check that the fd still exists after getting a ref to it\n2030476 - Kernel 4.18.0-348.2.1 secpath_cache memory leak involving strongswan tunnel\n2030747 - CVE-2021-44733 kernel: use-after-free in the TEE subsystem\n2031200 - rename(2) fails on subfolder mounts when the share path has a trailing slash\n2034342 - CVE-2021-4157 kernel: Buffer overwrite in decode_nfs_fh function\n2035652 - CVE-2021-4197 
kernel: cgroup: Use open-time creds and namespace for migration perm checks\n2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n2037019 - CVE-2022-0286 kernel: Local denial of service in bond_ipsec_add_sa\n2039911 - CVE-2021-45485 kernel: information leak in the IPv6 implementation\n2039914 - CVE-2021-45486 kernel: information leak in the IPv4 implementation\n2042798 - [RHEL8.6][sfc] General sfc driver update\n2042822 - CVE-2022-0322 kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c\n2043453 - [RHEL8.6 wireless] stack \u0026 drivers general update to v5.16+\n2046021 - kernel 4.18.0-358.el8 async dirops causes write errors with namespace restricted caps\n2048251 - Selinux is not allowing SCTP connection setup between inter pod communication in enforcing mode\n2061700 - CVE-2021-26401 hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715\n2061712 - CVE-2022-0001 hw: cpu: intel: Branch History Injection (BHI)\n2061721 - CVE-2022-0002 hw: cpu: intel: Intra-Mode BTI\n2064855 - CVE-2022-1011 kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS (v. 
8):\n\nSource:\nkernel-4.18.0-372.9.1.el8.src.rpm\n\naarch64:\nbpftool-4.18.0-372.9.1.el8.aarch64.rpm\nbpftool-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-core-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-cross-headers-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-core-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-devel-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-modules-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-modules-extra-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-devel-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-headers-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-modules-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-modules-extra-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-libs-4.18.0-372.9.1.el8.aarch64.rpm\nperf-4.18.0-372.9.1.el8.aarch64.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\npython3-perf-4.18.0-372.9.1.el8.aarch64.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\n\nnoarch:\nkernel-abi-stablelists-4.18.0-372.9.1.el8.noarch.rpm\nkernel-doc-4.18.0-372.9.1.el8.noarch.rpm\n\nppc64le:\nbpftool-4.18.0-372.9.1.el8.ppc64le.rpm\nbpftool-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-core-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-cross-headers-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-core-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-devel-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-modules-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-modules-extra-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-372.9.1.el8.p
pc64le.rpm\nkernel-devel-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-headers-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-modules-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-modules-extra-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-libs-4.18.0-372.9.1.el8.ppc64le.rpm\nperf-4.18.0-372.9.1.el8.ppc64le.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\npython3-perf-4.18.0-372.9.1.el8.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\n\ns390x:\nbpftool-4.18.0-372.9.1.el8.s390x.rpm\nbpftool-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-4.18.0-372.9.1.el8.s390x.rpm\nkernel-core-4.18.0-372.9.1.el8.s390x.rpm\nkernel-cross-headers-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-core-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-devel-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-modules-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-modules-extra-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debuginfo-common-s390x-4.18.0-372.9.1.el8.s390x.rpm\nkernel-devel-4.18.0-372.9.1.el8.s390x.rpm\nkernel-headers-4.18.0-372.9.1.el8.s390x.rpm\nkernel-modules-4.18.0-372.9.1.el8.s390x.rpm\nkernel-modules-extra-4.18.0-372.9.1.el8.s390x.rpm\nkernel-tools-4.18.0-372.9.1.el8.s390x.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-core-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-devel-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-modules-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-modules-extra-4.18.0-372.9.1.el8.s390x.rpm\nperf-4.18.0-372.9.1.el8.s390x.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\npython3-perf-4.18.0-372.9.1.el8.s390x.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\n\nx86_64:\nbpftool-4.18.0-372.9.1.el8.x86_64.rpm\nbpf
tool-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-core-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-cross-headers-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-core-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-devel-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-modules-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-modules-extra-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-devel-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-headers-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-modules-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-modules-extra-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-libs-4.18.0-372.9.1.el8.x86_64.rpm\nperf-4.18.0-372.9.1.el8.x86_64.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\npython3-perf-4.18.0-372.9.1.el8.x86_64.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\n\nRed Hat CodeReady Linux Builder (v. 
8):\n\naarch64:\nbpftool-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-libs-devel-4.18.0-372.9.1.el8.aarch64.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\n\nppc64le:\nbpftool-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-libs-devel-4.18.0-372.9.1.el8.ppc64le.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-libs-devel-4.18.0-372.9.1.el8.x86_64.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-0404\nhttps://access.redhat.com/security/cve/CVE-2020-4788\nhttps://access.redhat.com/security/cve/CVE-2020-13974\nhttps://access.redhat.com/security/cve/CVE-2020-27820\nhttps://access.redhat.com/security/cve/CVE-2021-0941\nhttps://access.redhat.com/security/cve/CVE-2021-3612\nhttps://access.redhat.com/security/cve/CVE-2021-3669\nhttps://access.redhat.com/security/cve/CVE-2021-3743\nhttps://access.redhat.com/security/cve/CVE-2021-3744\nhttps://access.redhat.com/security/cve/CVE-2021-3752\nhttps://access.redhat.com/security/cve/CVE-2021-3759\nhttps://access.redhat.com/security/cve/CVE-2021-3764\nhttps://access.redhat.com/security/cve/CVE-2021-3772\nhttps://access.redhat.com/security/cve/CVE-2021-3773\nhttps://access.redhat.com/security/cve/CVE-2021-4002\nhttps://access.redhat.com/security/cve/CVE-2021-4037\nhttps://access.redhat.com/security/cve/CVE-2021-4083\nhttps://access.redhat.com/security/cve/CVE-2021-4157\nhttps://access.redhat.com/security/cve/CVE-2021-4197\nhttps://access.redhat.com/security/cve/CVE-2021-4203\nhttps://access.redhat.com/security/cve/CVE-2021-20322\nhttps://access.redhat.com/security/cve/CVE-2021-21781\nhttps://access.redhat.com/security/cve/CVE-2021-26401\nhttps://access.redhat.com/security/cve/CVE-2021-29154\nhttps://access.redhat.com/security/cve/CVE-2021-37159\nhttps://access.redhat.com/security/cve/CVE-2021-41864\nhttps://access.redhat.com/security/cve/CVE-2021-42739\nhttps://access.redhat.com/security/cve/CVE-2021-43056\nhttps://access.redhat.com/security/cve/CVE-2021-43389\nhttps://access.redhat.com/security/cve/CVE-2021-43976\nhttps://access.redhat.com/security/cve/CVE-2021-44733\nhttps://access.redhat.com/security/cve/CVE-2021-45485\nhttps://access.redhat.com/security/cve/CVE-2021-45486\nhttps://access.redhat.com/security/cve/CVE-2022-0001\nhttps://access.redhat.com/security/cve/CVE-2022-0002\nhttps://access.redhat.com/security/cve/CVE-2022-0286\nhttps://access.redhat.com/
security/cve/CVE-2022-0322\nhttps://access.redhat.com/security/cve/CVE-2022-1011\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYnqSF9zjgjWX9erEAQjBXQ/8DSpFUMNN6ZVFtli2KuVowVLS+14J0jtj\n0zxpr0skJT8vVulU3VTeURBMdg9NAo9bj3R5KTk2+dC+AMuHET5aoVvaYmimBGKL\n5qzpu7q9Z0aaD2I288suHCnYuRJnt+qKZtNa4hlcY92bN0tcYBonxsdIS2xM6xIu\nGHNS8HNVUNz4PuCBfmbITvgX9Qx+iZQVlVccDBG5LDpVwgOtnrxHKbe5E499v/9M\noVoN+eV9ulHAZdCHWlUAahbsvEqDraCKNT0nHq/xO5dprPjAcjeKYMeaICtblRr8\nk+IouGywaN+mW4sBjnaaiuw2eAtoXq/wHisX1iUdNkroqcx9NBshWMDBJnE4sxQJ\nZOSc8B6yjJItPvUI7eD3BDgoka/mdoyXTrg+9VRrir6vfDHPrFySLDrO1O5HM5fO\n3sExCVO2VM7QMCGHJ1zXXX4szk4SV/PRsjEesvHOyR2xTKZZWMsXe1h9gYslbADd\ntW0yco/G23xjxqOtMKuM/nShBChflMy9apssldiOfdqODJMv5d4rRpt0xgmtSOM6\nqReveuQCasmNrGlAHgDwbtWz01fmSuk9eYDhZNmHA3gxhoHIV/y+wr0CLbOQtDxT\np79nhiqwUo5VMj/X30Lu0Wl3ptLuhRWamzTCkEEzdubr8aVsT4RRNQU3KfVFfpT1\nMWp/2ui3i80=\n=Fdgy\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] ?Update Repository? option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - ?Replication repository? wizard has no validation for name length\n2040695 - [MTC UI] ?Add Cluster? wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry ?xyz.com?\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the ?Last State? field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] ?Update cluster? 
button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster?s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC] Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.5.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. 
Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes: \n\n* nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)\n\n* containerd: Unprivileged pod may bind mount any privileged regular file\non disk (CVE-2021-43816)\n\n* minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)\n\n* openssl: Infinite loop in BN_mod_sqrt() reachable when parsing\ncertificates (CVE-2022-0778)\n\n* imgcrypt: Unauthorized access to encryted container image on a shared\nsystem due to missing check in CheckAuthorization() code path\n(CVE-2022-24778)\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nBug fixes:\n\n* RFE Copy secret with specific secret namespace, name for source and name,\nnamespace and cluster label for target (BZ# 2014557)\n\n* RHACM 2.5.0 images (BZ# 2024938)\n\n* [UI] When you delete host agent from infraenv no confirmation 
message\nappear (Are you sure you want to delete x?) (BZ#2028348)\n\n* Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller\nnot working properly (BZ# 2028647)\n\n* create cluster pool -\u003e choose infra type, As a result infra providers\ndisappear from UI. (BZ# 2033339)\n\n* Restore/backup shows up as Validation failed but the restore backup\nstatus in ACM shows success (BZ# 2034279)\n\n* Observability - OCP 311 node role are not displayed completely (BZ#\n2038650)\n\n* Documented uninstall procedure leaves many leftovers (BZ# 2041921)\n\n* infrastructure-operator pod crashes due to insufficient privileges in ACM\n2.5 (BZ# 2046554)\n\n* Acm failed to install due to some missing CRDs in operator (BZ# 2047463)\n\n* Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)\n\n* ACM home page now includes /home/ in url (BZ# 2051299)\n\n* proxy heading in Add Credential should be capitalized (BZ# 2051349)\n\n* ACM 2.5 tries to create new MCE instance when install on top of existing\nMCE 2.0 (BZ# 2051983)\n\n* Create Policy button does not work and user cannot use console to create\npolicy (BZ# 2053264)\n\n* No cluster information was displayed after a policyset was created (BZ#\n2053366)\n\n* Dynamic plugin update does not take effect in Firefox (BZ# 2053516)\n\n* Replicated policy should not be available when creating a Policy Set (BZ#\n2054431)\n\n* Placement section in Policy Set wizard does not reset when users click\n\"Back\" to re-configured placement (BZ# 2054433)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2028224 - RHACM 2.5.0 images\n2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)\n2028647 - Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller not working properly\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2033339 - create cluster pool -\u003e choose infra type , As a result infra providers disappear from UI. \n2034279 - Restore/backup shows up as Validation failed but the restore backup status in ACM shows success\n2036252 - CVE-2021-43858 minio: user privilege escalation in AddUser() admin API\n2038650 - Observability - OCP 311 node role are not displayed completely\n2041921 - Documented uninstall procedure leaves many leftovers\n2044434 - CVE-2021-43816 containerd: Unprivileged pod may bind mount any privileged regular file on disk\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2046554 - infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5\n2047463 - Acm failed to install due to some missing CRDs in operator\n2051298 - Navigation icons no longer showing in ACM 2.5\n2051299 - ACM home page now includes /home/ in url\n2051349 - proxy heading in Add Credential should be capitalized\n2051983 - ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System account\n2053264 - Create Policy button does not work and user cannot use console to 
create policy\n2053366 - No cluster information was displayed after a policyset was created\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n2053516 - Dynamic plugin update does not take effect in Firefox\n2054431 - Replicated policy should not be available when creating a Policy Set\n2054433 - Placement section in Policy Set wizard does not reset when users click \"Back\" to re-configured placement\n2054772 - credentialName is not parsed correctly in UI notifications/alerts when creating/updating a discovery config\n2054860 - Cluster overview page crashes for on-prem cluster\n2055333 - Unable to delete assisted-service operator\n2055900 - If MCH is installed on existing MCE and both are in multicluster-engine namespace , uninstalling MCH terminates multicluster-engine namespace\n2056485 - [UI] In infraenv detail the host list don\u0027t have pagination\n2056701 - Non platform install fails agentclusterinstall CRD is outdated in rhacm2.5\n2057060 - [CAPI] Unable to create ClusterDeployment due to service account restrictions (ACM + Bundled Assisted)\n2058435 - Label cluster.open-cluster-management.io/backup-cluster stamped \u0027unknown\u0027 for velero backups\n2059779 - spec.nodeSelector is missing in MCE instance created by MCH upon installing ACM on infra nodes\n2059781 - Policy UI crashes when viewing details of configuration policies for backupschedule that does not exist\n2060135 - [assisted-install] agentServiceConfig left orphaned after uninstalling ACM\n2060151 - Policy set of the same name cannot be re-created after the previous one has been deleted\n2060230 - [UI] Delete host modal has incorrect host\u0027s name populated\n2060309 - multiclusterhub stuck in installing on \"ManagedClusterConditionAvailable\" [intermittent]\n2060469 - The development branch of the Submariner addon deploys 0.11.0, not 0.12.0\n2060550 - MCE installation hang due to no console-mce-console deployment available\n2060603 - 
prometheus doesn\u0027t display managed clusters\n2060831 - Observability - prometheus-operator failed to start on *KS\n2060934 - Cannot provision AWS OCP 4.9 cluster from Power Hub\n2061260 - The value of the policyset placement should be filtered space when input cluster label expression\n2061311 - Cleanup of installed spoke clusters hang on deletion of spoke namespace\n2061659 - the network section in create cluster -\u003e Networking include the brace in the network title\n2061798 - [ACM 2.5] The service of Cluster Proxy addon was missing\n2061838 - ACM component subscriptions are removed when enabling spec.disableHubSelfManagement in MCH\n2062009 - No name validation is performed on Policy and Policy Set Wizards\n2062022 - cluster.open-cluster-management.io/backup-cluster of velero schedules should populate the corresponding hub clusterID\n2062025 - No validation is done on yaml\u0027s format or content in Policy and Policy Set wizards\n2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates\n2062337 - velero schedules get re-created after the backupschedule is in \u0027BackupCollision\u0027 phase\n2062462 - Upgrade to 2.5 hang due to irreconcilable errors of grc-sub and search-prod-sub in MCH\n2062556 - Always return the policyset page after created the policy from UI\n2062787 - Submariner Add-on UI does not indicate on Broker error\n2063055 - User with cluserrolebinding of open-cluster-management:cluster-manager-admin role can\u0027t see policies and clusters page\n2063341 - Release imagesets are missing in the console for ocp 4.10\n2063345 - Application Lifecycle- UI shows white blank page when the page is Refreshed\n2063596 - claim clusters from clusterpool throws errors\n2063599 - Update the message in clusterset -\u003e clusterpool page since we did not allow to add clusterpool to clusterset by resourceassignment\n2063697 - Observability - MCOCR reports object-storage secret without AWS access_key in STS 
enabled env\n2064231 - Can not clean the instance type for worker pool when create the clusters\n2064247 - prefer UI can add the architecture type when create the cluster\n2064392 - multicloud oauth-proxy failed to log users in on web\n2064477 - Click at \"Edit Policy\" for each policy leads to a blank page\n2064509 - No option to view the ansible job details and its history in the Automation wizard after creation of the automation job\n2064516 - Unable to delete an automation job of a policy\n2064528 - Columns of Policy Set, Status and Source on Policy page are not sortable\n2064535 - Different messages on the empty pages of Overview and Clusters when policy is disabled\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064722 - [Tracker] [DR][ACM 2.5] Applications are not getting deployed on managed cluster\n2064899 - Failed to provision openshift 4.10 on bare metal\n2065436 - \"Filter\" drop-down list does not show entries of the policies that have no top-level remediation specified\n2066198 - Issues about disabled policy from UI\n2066207 - The new created policy should be always shown up on the first line\n2066333 - The message was confuse when the cluster status is Running\n2066383 - MCE install failing on proxy disconnected environment\n2066433 - Logout not working for ACM 2.5\n2066464 - console-mce-console pods throw ImagePullError after upgrading to ocp 4.10\n2066475 - User with view-only rolebinding should not be allowed to create policy, policy set and automation job\n2066544 - The search box can\u0027t work properly in Policies page\n2066594 - RFE: Can\u0027t open the helm source link of the backup-restore-enabled policy from UI\n2066650 - minor issues in cluster curator due to the startup throws errors\n2066751 - the image repo of application-manager did not updated to use the image repo in MCE/MCH configuration\n2066834 - Hibernating cluster(s) in cluster pool stuck in \u0027Stopping\u0027 status after restore 
activation\n2066842 - cluster pool credentials are not backed up\n2066914 - Unable to remove cluster value during configuration of the label expressions for policy and policy set\n2066940 - Validation fired out for https proxy when the link provided not starting with https\n2066965 - No message is displayed in Policy Wizard to indicate a policy externally managed\n2066979 - MIssing groups in policy filter options comparing to previous RHACM version\n2067053 - I was not able to remove the image mirror content when create the cluster\n2067067 - Can\u0027t filter the cluster info when clicked the cluster in the Placement section\n2067207 - Bare metal asset secrets are not backed up\n2067465 - Categories,Standards, and Controls annotations are not updated after user has deleted a selected template\n2067713 - Columns on policy\u0027s \"Results\" are not sort-able as in previous release\n2067728 - Can\u0027t search in the policy creation or policyset creation Yaml editor\n2068304 - Application Lifecycle- Replicasets arent showing the logs console in Topology\n2068309 - For policy wizard in dynamics plugin environment, buttons at the bottom should be sticky and the contents of the Policy should scroll\n2068312 - Application Lifecycle - Argo Apps are not showing overview details and topology after upgrading from 2.4\n2068313 - Application Lifecycle - Refreshing overview page leads to a blank page\n2068328 - A cluster\u0027s \"View history\" page should not contain all clusters\u0027 violations history\n2068387 - Observability - observability operator always CrashLoopBackOff in FIPS upgrading hub\n2068993 - Observability - Node list is not filtered according to nodeType on OCP 311 dashboard\n2069329 - config-policy-controller addon with \"Unknown\" status in OCP 3.11 managed cluster after upgrade hub to 2.5\n2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encryted container image on a shared system due to missing check in CheckAuthorization() code path\n2069469 - 
Status of unreachable clusters is not reported in several places on GRC panels\n2069615 - The YAML editor can\u0027t work well when login UI using dynamic console plugin\n2069622 - No validation for policy template\u0027s name\n2069698 - After claim a cluster from clusterpool, the cluster pages become very very slow\n2069867 - Error occurs when trying to edit an application set/subscription\n2069870 - ACM/MCE Dynamic Plugins - 404: Page Not Found Error Occurs - intermittent crashing\n2069875 - Cluster secrets are not being created in the managed cluster\u0027s namespace\n2069895 - Application Lifecycle - Replicaset and Pods gives error messages when Yaml is selected on sidebar\n2070203 - Blank Application is shown when editing an Application with AnsibleJobs\n2070782 - Failed Secret Propagation to the Same Namespace as the AnsibleJob CR\n2070846 - [ACM 2.5] Can\u0027t re-add the default clusterset label after removing it from a managedcluster on BM SNO hub\n2071066 - Policy set details panel does not work when deployed into namespace different than \"default\"\n2071173 - Configured RunOnce automation job is not displayed although the policy has no violation\n2071191 - MIssing title on details panel after clicking \"view details\" of a policy set card\n2071769 - Placement must be always configured or error is reported when creating a policy\n2071818 - ACM logo not displayed in About info modal\n2071869 - Topology includes the status of local cluster resources when Application is only deployed to managed cluster\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2072097 - Local Cluster is shown as Remote on the Application Overview Page and Single App Overview Page\n2072104 - Inconsistent \"Not Deployed\" Icon Used Between 2.4 and 2.5 as well as the Overview and Topology\n2072177 - Cluster Resource Status is showing App Definition Statuses as well\n2072227 - Sidebar Statuses Need to Be Updated to Reflect Cluster List and Cluster Resource 
Statuses\n2072231 - Local Cluster not included in the appsubreport for Helm Applications Deployed on All Clusters\n2072334 - Redirect URL is now to the details page after created a policy\n2072342 - Shows \"NaN%\" in the ring chart when add the disabled policy into policyset and view its details\n2072350 - CRD Deployed via Application Console does not have correct deployment status and spelling\n2072359 - Report the error when editing compliance type in the YAML editor and then submit the changes\n2072504 - The policy has violations on the failed managed cluster\n2072551 - URL dropdown is not being rendered with an Argo App with a new URL\n2072773 - When a channel is deleted and recreated through the App Wizard, application creation stalls and warning pops up\n2072824 - The edit/delete policyset button should be greyed when using viewer check\n2072829 - When Argo App with jsonnet object is deployed, topology and cluster status would fail to display the correct statuses. \n2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub\n2073330 - Observabilityy - memory usage data are not collected even collect rule is fired on SNO\n2073355 - Get blank page when click policy with unknown status in Governance -\u003e Overview page\n2073508 - Thread responsible to get insights data from *ks clusters is broken\n2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters\n2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology\n2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin\n2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of \"resource conflict\" error\n2074178 - Editing Helm Argo Applications does not Prune Old Resources\n2074626 - Policy placement failure during ZTP SNO scale test\n2074689 - CVE-2022-21803 nconf: 
Prototype pollution in memory store\n2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal\n2074937 - UI allows creating cluster even when there are no ClusterImageSets\n2075416 - infraEnv failed to create image after restore\n2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod\n2075739 - The lookup function won\u0027t check the referred resource whether exist when using template policies\n2076421 - Can\u0027t select existing placement for policy or policyset when editing policy or policyset\n2076494 - No policyreport CR for spoke clusters generated in the disconnected env\n2076502 - The policyset card doesn\u0027t show the cluster status(violation/without violation) again after deleted one policy\n2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator\n2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster\n2077291 - Prometheus doesn\u0027t display acm_managed_cluster_info after upgrade from 2.4 to 2.5\n2077304 - Create Cluster button is disabled only if other clusters exist\n2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5\n2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI\n2077751 - Can\u0027t create a template policy from UI when the object\u0027s name is referring Golang text template syntax in this policy\n2077783 - Still show violation for clusterserviceversions after enforced \"Detect Image vulnerabilities \" policy template and the operator is installed\n2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set\n2078164 - Failed to edit a policy without placement\n2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement\n2078373 - Disable the hyperlink 
of *ks node in standalone MCE environment since the search component was not exists\n2078617 - Azure public credential details get pre-populated with base domain name in UI\n2078952 - View pod logs in search details returns error\n2078973 - Crashed pod is marked with success in Topology\n2079013 - Changing existing placement rules does not change YAML file\n2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM\n2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on\n2079494 - Hitting Enter in yaml editor caused unexpected keys \"key00x:\" to be created\n2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5\n2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page\n2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted\n2079615 - Edit appset placement in UI with a new placement throws error upon submitting\n2079658 - Cluster Count is Incorrect in Application UI\n2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower\n2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters\n2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080503 - vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect YAML changes\n2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page\n2080712 - Select an existing placement configuration does not work\n2080776 - Unrecognized characters are displayed on policy and policy set yaml editors\n2081792 - When deploying an application 
to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster\n2081810 - Type \u0027-\u0027 character in Name field caused previously typed character backspaced in in the name field of policy wizard\n2081829 - Application deployed on local cluster\u0027s topology is crashing after upgrade\n2081938 - The deleted policy still be shown on the policyset review page when edit this policy set\n2082226 - Object Storage Topology includes residue of resources after Upgrade\n2082409 - Policy set details panel remains even after the policy set has been deleted\n2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets\n2083038 - Warning still refers to the `klusterlet-addon-appmgr` pod rather than the `application-manager` pod\n2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated\n2083434 - The provider-credential-controller did not support the RHV credentials type\n2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace\n2083870 - When editing an existing application and refreshing the `Select an existing placement configuration`, multiple occurrences of the placementrule gets displayed\n2084034 - The status message looks messy in the policy set card, suggest one kind status one a row\n2084158 - Support provisioning bm cluster where no provisioning network provided\n2084622 - Local Helm application shows cluster resources as `Not Deployed` in Topology [Upgrade]\n2085083 - Policies fail to copy to cluster namespace after ACM upgrade\n2085237 - Resources referenced by a channel are not annotated with backup label\n2085273 - Error querying for ansible job in app topology\n2085281 - Template name error is reported but the template name was found in a different replicated policy\n2086389 - The policy violations for 
hibernated cluster still be displayed on the policy set details page\n2087515 - Validation thrown out in configuration for disconnect install while creating bm credential\n2088158 - Object Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]\n2088511 - Some cluster resources are not showing labels that are defined in the YAML\n\n5. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.8.53. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2022:7873\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s)\nlisted in the References section. Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nfor x86_64, s390x, and ppc64le architectures. 
The image digests\nmay be found at\nhttps://quay.io/repository/openshift-release-dev/ocp-release?tab=tags\n\nThe sha values for the release are:\n\n(For x86_64 architecture)\nThe image digest is\nsha256:ac2bbfa7036c64bbdb44f9a74df3dbafcff1b851d812bf2a48c4fabcac3c7a53\n\n(For s390x architecture)\nThe image digest is\nsha256:ac2c74a664257cea299126d4f789cdf9a5a4efc4a4e8c2361b943374d4eb21e4\n\n(For ppc64le architecture)\nThe image digest is\nsha256:53adc42ed30ad39d7117837dbf5a6db6943a8f0b3b61bc0d046b83394f5c28b2\n\nAll OpenShift Container Platform 4.8 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2077100 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204\n2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)\n2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)\n2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)\n2092928 - CVE-2022-26945 go-getter: command injection vulnerability\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOCPBUGS-2205 - Prefer local dns does not work expectedly on OCPv4.8\nOCPBUGS-2347 - [cluster-api-provider-baremetal] fix 4.8 build\nOCPBUGS-2577 - [4.8] ETCD Operator goes degraded when a second internal node ip is added\nOCPBUGS-2773 - e2e tests: Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name\nOCPBUGS-2989 - [4.8] cri-o should report the stage of container and pod creation it\u0027s stuck at\n\n6",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-45485"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "PACKETSTORM",
"id": "167097"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "167459"
},
{
"db": "PACKETSTORM",
"id": "169941"
}
],
"trust": 2.43
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-45485",
"trust": 4.1
},
{
"db": "PACKETSTORM",
"id": "169941",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169695",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169997",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169719",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166101",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "169411",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0205",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5536",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1225",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6062",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0215",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.0061",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0380",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6111",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2855",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0615",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3236",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3136",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0121",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5590",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070643",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022062931",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265",
"trust": 0.6
},
{
"db": "VULHUB",
"id": "VHN-409116",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-45485",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167097",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167622",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167072",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167330",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167679",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167459",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "PACKETSTORM",
"id": "167097"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "167459"
},
{
"db": "PACKETSTORM",
"id": "169941"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"id": "VAR-202112-2255",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T20:30:28.280000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "NTAP-20220121-0001",
"trust": 0.8,
"url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/changelog-5.13.3"
},
{
"title": "Linux kernel Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=177039"
},
{
"title": "Red Hat: Important: kernel security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226983 - security advisory"
},
{
"title": "Red Hat: Important: kernel-rt security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226991 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.9.7 Images security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228609 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.8.53 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227874 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.10.39 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227211 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.9.51 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227216 - security advisory"
},
{
"title": "Ubuntu Security Notice: USN-5299-1: Linux kernel vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5299-1"
},
{
"title": "Red Hat: Important: kernel security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221988 - security advisory"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.6.5 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20224814 - security advisory"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.2 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225483 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.5 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225201 - security advisory"
},
{
"title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20224956 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.11 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225392 - security advisory"
},
{
"title": "Ubuntu Security Notice: USN-5343-1: Linux kernel vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5343-1"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
},
{
"title": "",
"trust": 0.1,
        "url": "https://github.com/syrti/poc_to_review"
},
{
"title": "",
"trust": 0.1,
        "url": "https://github.com/trhacknon/pocingit"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-327",
"trust": 1.1
},
{
"problemtype": "Use of incomplete or dangerous cryptographic algorithms (CWE-327) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 2.6,
"url": "https://arxiv.org/pdf/2112.09604.pdf"
},
{
"trust": 2.6,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20220121-0001/"
},
{
"trust": 1.8,
"url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/changelog-5.13.3"
},
{
"trust": 1.8,
"url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62f20e068ccc50d6ab66fdb72ba90da2b9418c99"
},
{
"trust": 1.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45485"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-45486"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-45485"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3752"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-4157"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3744"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-27820"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3743"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-1011"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-4037"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-29154"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3759"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-4083"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-37159"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3772"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-0404"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3669"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3764"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-13974"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-20322"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-0322"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3773"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-4002"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-41864"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-43976"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-4197"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-0002"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-4203"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-0941"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-43389"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-44733"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3612"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-42739"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-0286"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-0001"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-26401"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1225"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2855"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169411/red-hat-security-advisory-2022-6991-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169719/red-hat-security-advisory-2022-7216-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0061"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0121"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0380"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169997/red-hat-security-advisory-2022-8609-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169695/red-hat-security-advisory-2022-7211-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022062931"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5590"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169941/red-hat-security-advisory-2022-7874-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6062"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-information-disclosure-via-ipv6-id-generation-37138"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6111"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166101/ubuntu-security-notice-usn-5299-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0615"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3136"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070643"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0205"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3236"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0215"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5536"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-43056"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-4788"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-21781"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3752"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3772"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3759"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3773"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3743"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3764"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-37159"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4002"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3744"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4189"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3737"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4037"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4157"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-19131"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-42739"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4203"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4197"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41864"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21803"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29810"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35492"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3807"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/327.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6983"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5299-1"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1988"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4305"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38185"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28733"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3697"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28734"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28737"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3695"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28735"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5392"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1975"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43976"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:4814"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5483"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43858"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43816"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:4956"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3918"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30322"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21626"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21626"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30322"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30321"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2588"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7874"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39399"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30321"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2588"
},
{
"trust": 0.1,
"url": "https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21619"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45486"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26945"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21618"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21166"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21618"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2022:7873"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21125"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21628"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21619"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21166"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30323"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21125"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41974"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "PACKETSTORM",
"id": "167097"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "167459"
},
{
"db": "PACKETSTORM",
"id": "169941"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "PACKETSTORM",
"id": "167097"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "167459"
},
{
"db": "PACKETSTORM",
"id": "169941"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-12-25T00:00:00",
"db": "VULHUB",
"id": "VHN-409116"
},
{
"date": "2021-12-25T00:00:00",
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"date": "2023-01-18T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"date": "2022-05-11T16:54:36",
"db": "PACKETSTORM",
"id": "167097"
},
{
"date": "2022-06-29T20:27:02",
"db": "PACKETSTORM",
"id": "167622"
},
{
"date": "2022-05-11T16:37:26",
"db": "PACKETSTORM",
"id": "167072"
},
{
"date": "2022-05-31T17:24:53",
"db": "PACKETSTORM",
"id": "167330"
},
{
"date": "2022-07-01T15:04:32",
"db": "PACKETSTORM",
"id": "167679"
},
{
"date": "2022-06-09T16:11:52",
"db": "PACKETSTORM",
"id": "167459"
},
{
"date": "2022-11-18T14:28:39",
"db": "PACKETSTORM",
"id": "169941"
},
{
"date": "2021-12-25T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"date": "2021-12-25T02:15:06.667000",
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-02-24T00:00:00",
"db": "VULHUB",
"id": "VHN-409116"
},
{
"date": "2023-02-24T00:00:00",
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"date": "2023-01-18T05:28:00",
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"date": "2023-01-04T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"date": "2023-02-24T15:07:31.653000",
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
}
],
"trust": 0.6
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
      "data": "Linux Kernel vulnerability in the use of cryptographic algorithms",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "encryption problem",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
}
],
"trust": 0.6
}
}
VAR-202006-0222
Vulnerability from variot - Updated: 2024-07-23 20:28 libpcre in PCRE before 8.44 allows an integer overflow via a large number after a (?C substring. PCRE is an open source regular expression library written in C by software developer Philip Hazel. An input validation error vulnerability exists in libpcre in PCRE versions prior to 8.44. An attacker could exploit this vulnerability to execute arbitrary code on the system or cause an application to crash. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
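The overflow class described above can be illustrated with a small, hypothetical Python model (this is a sketch of fixed-width digit accumulation, not PCRE's actual parsing code): the callout number following `(?C` in a pattern such as `(?C255)` is accumulated digit by digit into what is effectively a signed 32-bit C integer, so a sufficiently long digit run silently wraps around instead of being rejected.

```python
def parse_callout_number(digits: str) -> int:
    """Hypothetical model of a C parser that accumulates a decimal
    number into a signed 32-bit integer, wrapping on overflow the way
    two's-complement arithmetic typically does in practice."""
    n = 0
    for ch in digits:
        # accumulate one decimal digit, then truncate to 32 bits
        n = (n * 10 + (ord(ch) - ord("0"))) & 0xFFFFFFFF
    # reinterpret the 32-bit pattern as a signed value
    if n >= 2**31:
        n -= 2**32
    return n

# A short callout number parses as expected.
print(parse_callout_number("255"))

# An over-long digit run, like one after "(?C" in a crafted pattern,
# wraps inside the 32-bit accumulator instead of being rejected --
# the class of bug this entry says is patched in PCRE 8.44.
wrapped = parse_callout_number("99999999999999999999")
print(-2**31 <= wrapped < 2**31)
```

The fix for this class of bug is to range-check the number while parsing rather than after accumulation, since the overflow has already occurred by then.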
APPLE-SA-2021-02-01-1 macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave
macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212147.
Analytics Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2021-1761: Cees Elzinga
APFS Available for: macOS Big Sur 11.0.1 Impact: A local user may be able to read arbitrary files Description: The issue was addressed with improved permissions logic. CVE-2021-1797: Thomas Tempelmann
CFNetwork Cache Available for: macOS Catalina 10.15.7 and macOS Mojave 10.14.6 Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An integer overflow was addressed with improved input validation. CVE-2020-27945: Zhuo Liang of Qihoo 360 Vulcan Team
CoreAnimation Available for: macOS Big Sur 11.0.1 Impact: A malicious application could execute arbitrary code leading to compromise of user information Description: A memory corruption issue was addressed with improved state management. CVE-2021-1760: @S0rryMybad of 360 Vulcan Team
CoreAudio Available for: macOS Big Sur 11.0.1 Impact: Processing maliciously crafted web content may lead to code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-1747: JunDong Xie of Ant Security Light-Year Lab
CoreGraphics Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-1776: Ivan Fratric of Google Project Zero
CoreMedia Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-1759: Hou JingYi (@hjy79425575) of Qihoo 360 CERT
CoreText Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted text file may lead to arbitrary code execution Description: A stack overflow was addressed with improved input validation. CVE-2021-1772: Mickey Jin of Trend Micro working with Trend Micro’s Zero Day Initiative
CoreText Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A remote attacker may be able to cause arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1792: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative
Crash Reporter Available for: macOS Catalina 10.15.7 Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2021-1761: Cees Elzinga
Crash Reporter Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A local attacker may be able to elevate their privileges Description: Multiple issues were addressed with improved logic. CVE-2021-1787: James Hutchins
Crash Reporter Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A local user may be able to create or modify system files Description: A logic issue was addressed with improved state management. CVE-2021-1786: Csaba Fitzl (@theevilbit) of Offensive Security
Directory Utility Available for: macOS Catalina 10.15.7 Impact: A malicious application may be able to access private information Description: A logic issue was addressed with improved state management. CVE-2020-27937: Wojciech Reguła (@_r3ggi) of SecuRing
Endpoint Security Available for: macOS Catalina 10.15.7 Impact: A local attacker may be able to elevate their privileges Description: A logic issue was addressed with improved state management. CVE-2021-1802: Zhongcheng Li (@CK01) from WPS Security Response Center
FairPlay Available for: macOS Big Sur 11.0.1 Impact: A malicious application may be able to disclose kernel memory Description: An out-of-bounds read issue existed that led to the disclosure of kernel memory. This was addressed with improved input validation. CVE-2021-1791: Junzhi Lu (@pwn0rz), Qi Sun & Mickey Jin of Trend Micro working with Trend Micro’s Zero Day Initiative
FontParser Available for: macOS Catalina 10.15.7 Impact: Processing a maliciously crafted font may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-1790: Peter Nguyen Vu Hoang of STAR Labs
FontParser Available for: macOS Mojave 10.14.6 Impact: Processing a maliciously crafted font may lead to arbitrary code execution Description: This issue was addressed by removing the vulnerable code. CVE-2021-1775: Mickey Jin and Qi Sun of Trend Micro
FontParser Available for: macOS Mojave 10.14.6 Impact: A remote attacker may be able to leak memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2020-29608: Xingwei Lin of Ant Security Light-Year Lab
FontParser Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: A remote attacker may be able to cause arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1758: Peter Nguyen of STAR Labs
ImageIO Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An access issue was addressed with improved memory management. CVE-2021-1783: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1741: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1743: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative, Xingwei Lin of Ant Security Light- Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to a denial of service Description: A logic issue was addressed with improved state management. CVE-2021-1773: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to a denial of service Description: An out-of-bounds read issue existed in the curl. This issue was addressed with improved bounds checking. CVE-2021-1778: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-1736: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1785: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted image may lead to a denial of service Description: This issue was addressed with improved checks. CVE-2021-1766: Danny Rosseau of Carve Systems
ImageIO Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-1818: Xingwei Lin from Ant-Financial Light-Year Security Lab
ImageIO Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-1742: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1746: Mickey Jin & Qi Sun of Trend Micro, Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1754: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1774: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1777: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1793: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-1737: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1738: Lei Sun CVE-2021-1744: Xingwei Lin of Ant Security Light-Year Lab
IOKit Available for: macOS Big Sur 11.0.1 Impact: An application may be able to execute arbitrary code with system privileges Description: A logic error in kext loading was addressed with improved state handling. CVE-2021-1779: Csaba Fitzl (@theevilbit) of Offensive Security
IOSkywalkFamily Available for: macOS Big Sur 11.0.1 Impact: A local attacker may be able to elevate their privileges Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1757: Pan ZhenPeng (@Peterpan0927) of Alibaba Security, Proteas
Kernel Available for: macOS Catalina 10.15.7 and macOS Mojave 10.14.6 Impact: An application may be able to execute arbitrary code with kernel privileges Description: A logic issue existed resulting in memory corruption. This was addressed with improved state management. CVE-2020-27904: Zuozhi Fan (@pattern_F_) of Ant Group Tianqiong Security Lab
Kernel Available for: macOS Big Sur 11.0.1 Impact: A remote attacker may be able to cause a denial of service Description: A use after free issue was addressed with improved memory management. CVE-2021-1764: @m00nbsd
Kernel Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A malicious application may be able to elevate privileges. Apple is aware of a report that this issue may have been actively exploited. Description: A race condition was addressed with improved locking. CVE-2021-1782: an anonymous researcher
Kernel Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: An application may be able to execute arbitrary code with kernel privileges Description: Multiple issues were addressed with improved logic. CVE-2021-1750: @0xalsr
Login Window Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: An attacker in a privileged network position may be able to bypass authentication policy Description: An authentication issue was addressed with improved state management. CVE-2020-29633: Jewel Lambert of Original Spin, LLC.
Messages Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A user that is removed from an iMessage group could rejoin the group Description: This issue was addressed with improved checks. CVE-2021-1771: Shreyas Ranganatha (@strawsnoceans)
Model I/O Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-1762: Mickey Jin of Trend Micro
Model I/O Available for: macOS Catalina 10.15.7 Impact: Processing a maliciously crafted file may lead to heap corruption Description: This issue was addressed with improved checks. CVE-2020-29614: ZhiWei Sun (@5n1p3r0010) from Topsec Alpha Lab
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: A buffer overflow was addressed with improved bounds checking. CVE-2021-1763: Mickey Jin of Trend Micro working with Trend Micro’s Zero Day Initiative
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted image may lead to heap corruption Description: This issue was addressed with improved checks. CVE-2021-1767: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-1745: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1753: Mickey Jin of Trend Micro working with Trend Micro’s Zero Day Initiative
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1768: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative
NetFSFramework Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Mounting a maliciously crafted Samba network share may lead to arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-1751: Mikko Kenttälä (@Turmio_) of SensorFu
OpenLDAP Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2020-25709
Power Management Available for: macOS Mojave 10.14.6, macOS Catalina 10.15.7 Impact: A malicious application may be able to elevate privileges Description: A logic issue was addressed with improved state management. CVE-2020-27938: Tim Michaud (@TimGMichaud) of Leviathan
Screen Sharing Available for: macOS Big Sur 11.0.1 Impact: Multiple issues in pcre Description: Multiple issues were addressed by updating to version 8.44. CVE-2019-20838 CVE-2020-14155
SQLite Available for: macOS Catalina 10.15.7 Impact: Multiple issues in SQLite Description: Multiple issues were addressed by updating SQLite to version 3.32.3. CVE-2020-15358
Swift Available for: macOS Big Sur 11.0.1 Impact: A malicious attacker with arbitrary read and write capability may be able to bypass Pointer Authentication Description: A logic issue was addressed with improved validation. CVE-2021-1769: CodeColorist of Ant-Financial Light-Year Labs
WebKit Available for: macOS Big Sur 11.0.1 Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-1788: Francisco Alonso (@revskills)
WebKit Available for: macOS Big Sur 11.0.1 Impact: Maliciously crafted web content may violate iframe sandboxing policy Description: This issue was addressed with improved iframe sandbox enforcement. CVE-2021-1765: Eliya Stein of Confiant CVE-2021-1801: Eliya Stein of Confiant
WebKit Available for: macOS Big Sur 11.0.1 Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved state handling. CVE-2021-1789: @S0rryMybad of 360 Vulcan Team
WebKit Available for: macOS Big Sur 11.0.1 Impact: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. Description: A logic issue was addressed with improved restrictions. CVE-2021-1871: an anonymous researcher CVE-2021-1870: an anonymous researcher
WebRTC Available for: macOS Big Sur 11.0.1 Impact: A malicious website may be able to access restricted ports on arbitrary servers Description: A port redirection issue was addressed with additional port validation. CVE-2021-1799: Gregory Vishnepolsky & Ben Seri of Armis Security, and Samy Kamkar
Additional recognition
Kernel We would like to acknowledge Junzhi Lu (@pwn0rz), Mickey Jin & Jesse Chang of Trend Micro for their assistance.
libpthread We would like to acknowledge CodeColorist of Ant-Financial Light-Year Labs for their assistance.
Login Window We would like to acknowledge Jose Moises Romero-Villanueva of CrySolve for their assistance.
Mail Drafts We would like to acknowledge Jon Bottarini of HackerOne for their assistance.
Screen Sharing Server We would like to acknowledge @gorelics for their assistance.
WebRTC We would like to acknowledge Philipp Hancke for their assistance.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
Summary:
The Migration Toolkit for Containers (MTC) 1.5.2 is now available.

Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.
Security Fix(es):
- nodejs-immer: prototype pollution may lead to DoS or remote code execution (CVE-2021-3757)
- mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution
2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)
2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster
2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration
2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
Description:
This release adds the new Apache HTTP Server 2.4.37 Service Pack 10 packages that are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release.
Security Fix(es):
- httpd: Single zero byte stack overflow in mod_auth_digest (CVE-2020-35452)
- httpd: mod_session NULL pointer dereference in parser (CVE-2021-26690)
- httpd: Heap overflow in mod_session (CVE-2021-26691)
- httpd: mod_proxy_wstunnel tunneling of non Upgraded connection (CVE-2019-17567)
- httpd: MergeSlashes regression (CVE-2021-30641)
- httpd: mod_proxy NULL pointer dereference (CVE-2020-13950)
- jbcs-httpd24-openssl: openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
- openssl: Read buffer overruns processing ASN.1 strings (CVE-2021-3712)
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)
- pcre: buffer over-read in JIT when UTF is disabled (CVE-2019-20838)
- pcre: integer overflow in libpcre (CVE-2020-14155)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.

Bugs fixed (https://bugzilla.redhat.com/):
1848436 - CVE-2020-14155 pcre: Integer overflow when parsing callout numeric arguments
1848444 - CVE-2019-20838 pcre: Buffer over-read in JIT when UTF is disabled and \X or \R has fixed quantifier greater than 1
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1966724 - CVE-2020-35452 httpd: Single zero byte stack overflow in mod_auth_digest
1966729 - CVE-2021-26690 httpd: mod_session: NULL pointer dereference when parsing Cookie header
1966732 - CVE-2021-26691 httpd: mod_session: Heap overflow via a crafted SessionHeader value
1966738 - CVE-2020-13950 httpd: mod_proxy NULL pointer dereference
1966740 - CVE-2019-17567 httpd: mod_proxy_wstunnel tunneling of non Upgraded connection
1966743 - CVE-2021-30641 httpd: Unexpected URL matching with 'MergeSlashes OFF'
1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings
Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
Bugs fixed (https://bugzilla.redhat.com/):
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
JIRA issues fixed (https://issues.jboss.org/):
TRACING-2235 - Release RHOSDT 2.1
==========================================================================
Ubuntu Security Notice USN-5425-1
May 17, 2022
pcre3 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in PCRE.
Software Description:
- pcre3: Perl 5 Compatible Regular Expression Library
Details:
Yunho Kim discovered that PCRE incorrectly handled memory when handling certain regular expressions. An attacker could possibly use this issue to cause applications using PCRE to expose sensitive information. This issue only affects Ubuntu 18.04 LTS, Ubuntu 20.04 LTS, Ubuntu 21.10 and Ubuntu 22.04 LTS. (CVE-2019-20838)
It was discovered that PCRE incorrectly handled memory when handling certain regular expressions. An attacker could possibly use this issue to cause applications using PCRE to have unexpected behavior. This issue only affects Ubuntu 14.04 ESM, Ubuntu 16.04 ESM, Ubuntu 18.04 LTS and Ubuntu 20.04 LTS. (CVE-2020-14155)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: libpcre3 2:8.39-13ubuntu0.22.04.1
Ubuntu 21.10: libpcre3 2:8.39-13ubuntu0.21.10.1
Ubuntu 20.04 LTS: libpcre3 2:8.39-12ubuntu0.1
Ubuntu 18.04 LTS: libpcre3 2:8.39-9ubuntu0.1
Ubuntu 16.04 ESM: libpcre3 2:8.38-3.1ubuntu0.1~esm1
Ubuntu 14.04 ESM: libpcre3 1:8.31-2ubuntu2.3+esm1
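Correcting the problem amounts to ensuring the installed libpcre3 version is at or above the fixed version listed for your release. A minimal shell sketch of that comparison (the `fixed` and `installed` strings are illustrative; on a real host `installed` would come from `dpkg-query -W -f='${Version}' libpcre3`, and `dpkg --compare-versions` is the authoritative comparator):

```shell
# Illustrative check: is the installed libpcre3 at or above the fixed version?
# Both version strings below are examples taken from / modeled on the notice;
# a real system would read "installed" from dpkg-query.
fixed="2:8.39-13ubuntu0.22.04.1"       # Ubuntu 22.04 LTS fix from the notice
installed="2:8.39-13ubuntu0.22.04.1"   # assumed already-patched for the demo

# sort -V orders version strings; if the fixed version sorts first (or the
# two are equal), the installed version is new enough.
lowest=$(printf '%s\n' "$fixed" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$fixed" ]; then
  echo "libpcre3 is patched"
else
  echo "libpcre3 needs updating: apt update && apt install --only-upgrade libpcre3"
fi
```

This is a sketch only; `sort -V` approximates but does not fully implement Debian version-ordering rules (epochs and `~` pre-release markers in particular), which is why `dpkg --compare-versions` should be preferred on an actual Debian or Ubuntu system.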
After a standard system update you need to restart applications using PCRE, such as the Apache HTTP server and Nginx, to make all the necessary changes.

Summary:
Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.11.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:5068
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Security Fix(es):
- go-getter: command injection vulnerability (CVE-2022-26945)
- go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
- go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
- go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- sanitize-url: XSS (CVE-2021-23648)
- minimist: prototype pollution (CVE-2021-44906)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64
The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
(For aarch64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64
The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x
The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le
The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca
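The digests above pin each release image to exact content. As a hedged sketch (generic OCI-digest hygiene, not part of the `oc` tooling itself), a by-digest image reference can be constructed and the digest string sanity-checked before use:

```shell
# Illustrative only: build a by-digest image reference from one of the
# advisory's published digests and verify it has the expected OCI form
# ("sha256:" followed by 64 lowercase hex characters).
digest="sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4"
repo="quay.io/openshift-release-dev/ocp-release"

if printf '%s' "$digest" | grep -Eq '^sha256:[0-9a-f]{64}$'; then
  image="${repo}@${digest}"
  echo "pinned image: ${image}"
else
  echo "malformed digest: ${digest}" >&2
  exit 1
fi
```

Pulling by digest rather than by tag guarantees you get exactly the release payload the advisory names, regardless of where the tag is later repointed.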
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
Bugs fixed (https://bugzilla.redhat.com/):
1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - oc adm policy who-can failed to check the operatorcondition/status resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the container-tools content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fail to start during upgrade to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for --reference-policy in oc import-image without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under the cluster's basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - available of text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - oc debug node does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Operator pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while creating a new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Should upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intended website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod looks too far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster because the jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new-app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manager should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from vmx-13 to vmx-15
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in oc get
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.config.openshift.io cluster resource definition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Restart of ptp4l/phc2sys on change of PTPConfig generates more than one socket error in event framework
2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page navigates to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app DNS record in an IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic extension point causes runtime and compile time error
2055861 - cronjob collect-profiles failing leads node to reach OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exist for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to name the oc-mirror version info with more information like the oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name is confusing
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags every time
2058368 - Openshift OVN-K got restarted multiple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but getting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles do not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn is not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because it passes a redundant "IMG=" on the CLI
2063753 - User Preferences - Language - Language selection : Page refresh required to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - Using the hard-coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API too loosely
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and
elements cause display issues for ACM dynamic plugins 2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's 2070181 - [MAPO] serverGroupName ignored 2070457 - Image vulnerability Popover overflows from the visible area 2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes 2070703 - some ipv6 network policy tests consistently failing 2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears 2070731 - details switch label is not clickable on add page 2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled 2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability 2070805 - ClusterVersion: could not download the update 2070854 - cv.status.capabilities.enabledCapabilities doesn?t show the day-2 enabled caps when there are errors on resources update 2070887 - Cv condition ImplicitlyEnabledCapabilities doesn?t complain about the disabled capabilities which is previously enabled 2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci 2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes 2071019 - rebase vsphere csi driver 2.5 2071021 - vsphere driver has snapshot support missing 2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong 2071139 - Ingress pods scheduled on the same node 2071364 - All image building tests are broken with " error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax 2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC) 2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console 2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType 2071617 - remove Kubevirt extensions in favour 
of dynamic plugin 2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO 2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs 2071700 - v1 events show "Generated from" message without the source/reporting component 2071715 - Shows 404 on Environment nav in Developer console 2071719 - OCP Console global PatternFly overrides link button whitespace 2071747 - Link to documentation from the overview page goes to a missing link 2071761 - Translation Keys Are Not Namespaced 2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable 2071859 - ovn-kube pods spec.dnsPolicy should be Default 2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name "" 2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates 2072106 - cluster-ingress-operator tests do not build on go 1.18 2072134 - Routes are not accessible within cluster from hostnet pods 2072139 - vsphere driver has permissions to create/update PV objects 2072154 - Secondary Scheduler operator panics 2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails 2072195 - machine api doesn't issue client cert when AWS DNS suffix missing 2072215 - Whereabouts ip-reconciler should be opt-in and not required 2072389 - CVO exits upgrade immediately rather than waiting for etcd backup 2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes 2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml 2072570 - The namespace titles for operator-install-single-namespace test keep changing 2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed) 2072766 - Cluster Network Operator stuck in 
CrashLoopBackOff when scheduled to same master 2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node 2072793 - Drop "Used Filesystem" from "Virtualization -> Overview" 2072805 - Observe > Dashboards: $__range variables cause PromQL query errors 2072807 - Observe > Dashboards: Missingpanel.stylesattribute for table panels causes JS error 2072842 - (release-4.11) Gather namespace names with overlapping UID ranges 2072883 - sometimes monitoring dashboards charts can not be loaded successfully 2072891 - Update gcp-pd-csi-driver to 1.5.1; 2072911 - panic observed in kubedescheduler operator 2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial 2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system 2072998 - update aws-efs-csi-driver to the latest version 2072999 - Navigate from logs of selected Tekton task instead of last one 2073021 - [vsphere] Failed to update OS on master nodes 2073112 - Prometheus (uwm) externalLabels not showing always in alerts. 2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated. 2073176 - removing data in form does not remove data from yaml editor 2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists 2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs". 
2073373 - Update azure-disk-csi-driver to 1.16.0 2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig 2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning 2073436 - Update azure-file-csi-driver to v1.14.0 2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls 2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add) 2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. 2073522 - Update ibm-vpc-block-csi-driver to v4.2.0 2073525 - Update vpc-node-label-updater to v4.1.2 2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled 2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW 2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses 2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies 2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring 2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet 2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary 2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well 2074084 - CMO metrics not visible in the OCP webconsole UI 2074100 - CRD filtering according to name broken 2074210 - asia-south2, australia-southeast2, and southamerica-west1Missing from GCP regions 2074237 - oc new-app --image-stream flag behavior is unclear 2074243 - DefaultPlacement API allow empty enum value and remove default 2074447 - cluster-dashboard: CPU 
Utilisation iowait and steal 2074465 - PipelineRun fails in import from Git flow if "main" branch is default 2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled 2074475 - [e2e][automation] kubevirt plugin cypress tests fail 2074483 - coreos-installer doesnt work on Dell machines 2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes 2074585 - MCG standalone deployment page goes blank when the KMS option is enabled 2074606 - occm does not have permissions to annotate SVC objects 2074612 - Operator fails to install due to service name lookup failure 2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system 2074635 - Unable to start Web Terminal after deleting existing instance 2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records 2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver 2074710 - Transition to go-ovirt-client 2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab 2074767 - Metrics page show incorrect values due to metrics level config 2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in 2074902 -oc debug node/nodename ? 
chroot /host somecommandshould exit with non-zero when the sub-command failed 2075015 - etcd-guard connection refused event repeating pathologically (payload blocking) 2075024 - Metal upgrades permafailing on metal3 containers crash looping 2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP 2075091 - Symptom Detection.Undiagnosed panic detected in pod 2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row) 2075149 - Trigger Translations When Extensions Are Updated 2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors 2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured 2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work 2075478 - Bump documentationBaseURL to 4.11 2075491 - nmstate operator cannot be upgraded on SNO 2075575 - Local Dev Env - Prometheus 404 Call errors spam the console 2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled 2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow 2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade 2075647 - 'oc adm upgrade ...' 
POSTs ClusterVersion, clobbering any unrecognized spec properties 2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects 2075778 - Fix failing TestGetRegistrySamples test 2075873 - Bump recommended FCOS to 35.20220327.3.0 2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect 2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs 2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object 2076290 - PTP operator readme missing documentation on BC setup via PTP config 2076297 - Router process ignores shutdown signal while starting up 2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable 2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap 2076393 - [VSphere] survey fails to list datacenters 2076521 - Nodes in the same zone are not updated in the right order 2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast' 2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10 2076553 - Project access view replace group ref with user ref when updating their Role 2076614 - Missing Events component from the SDK API 2076637 - Configure metrics for vsphere driver to be reported 2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters 2076793 - CVO exits upgrade immediately rather than waiting for etcd backup 2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours 2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26 2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it 2076975 - Metric unset during 
static route conversion in configure-ovs.sh 2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI 2077050 - OCP should default to pd-ssd disk type on GCP 2077150 - Breadcrumbs on a few screens don't have correct top margin spacing 2077160 - Update owners for openshift/cluster-etcd-operator 2077357 - [release-4.11] 200ms packet delay with OVN controller turn on 2077373 - Accessibility warning on developer perspective 2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge) 2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager" 2077497 - Rebase etcd to 3.5.3 or later 2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API 2077599 - OCP should alert users if they are on vsphere version <7.0.2 2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster 2077797 - LSO pods don't have any resource requests 2077851 - "make vendor" target is not working 2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays 2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region 2078013 - drop multipathd.socket workaround 2078375 - When using the wizard with template using data source the resulting vm use pvc source 2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label 2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec 2078526 - Multicast breaks after master node reboot/sync 2078573 - SDN CNI -Fail to create nncp when vxlan is up 2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. 
2078698 - search box may not completely remove content 2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun) 2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused ?apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int? when AllRequestBodies audit-profile is used. 2078781 - PreflightValidation does not handle multiarch images 2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress 2078875 - OpenShift Installer fail to remove Neutron ports 2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml 2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema" 2078945 - Ensure only one apiserver-watcher process is active on a node. 2078954 - network-metrics-daemon makes costly global pod list calls scaling per node 2078969 - Avoid update races between old and new NTO operands during cluster upgrades 2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned 2079062 - Test for console demo plugin toast notification needs to be increased for ci testing 2079197 - [RFE] alert when more than one default storage class is detected 2079216 - Partial cluster update reference doc link returns 404 2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity 2079315 - (release-4.11) Gather ODF config data with Insights 2079422 - Deprecated 1.25 API call 2079439 - OVN Pods Assigned Same IP Simultaneously 2079468 - Enhance the waitForIngressControllerCondition for better CI results 2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster 2079610 - Opeatorhub status shows errors 2079663 - change default image features in RBD storageclass 2079673 - Add flags to disable migrated code 2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS 
connection details when vaulttenantsa details are available in csi-kms-details config 2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster 2079788 - Operator restarts while applying the acm-ice example 2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade 2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade 2079805 - Secondary scheduler operator should comply to restricted pod security level 2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding 2079837 - [RFE] Hub/Spoke example with daemonset 2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation 2079845 - The Event Sinks catalog page now has a blank space on the left 2079869 - Builds for multiple kernel versions should be ran in parallel when possible 2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices 2079961 - The search results accordion has no spacing between it and the side navigation bar. 
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector does not appear between nodes if a node is created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - 'oc explain' output of default overlaySize is outdated
2081172 - MetalLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.postStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing a huge amount of ports takes a lot of time.
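Several entries above (for example 2081562, where a lifecycle.postStart hook lacks network connectivity) concern Kubernetes container lifecycle hooks. As a hedged illustration only, not taken from the advisory, a minimal Pod manifest exercising the scenario looks like this (all names and the probed URL are hypothetical):

```yaml
# Illustrative sketch: a Pod whose postStart hook needs pod networking,
# the situation bug 2081562 describes. All names here are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    lifecycle:
      postStart:
        exec:
          # If pod networking is not yet ready when the hook runs,
          # this command fails and the kubelet kills the container.
          command: ["sh", "-c", "curl -sf http://config-service.internal/ready"]
```

The hook runs asynchronously with the container's entrypoint, so any network dependency inside it is sensitive to how early the pod's network is plumbed.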
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM] Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV] workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... interface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external services URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment.
2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data
2084441 - [IPI-Azure] fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcodes the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v1.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of the resources does not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' displayed on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply with restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error when invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by oc image mirror command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - --keep-manifest-list=true does not work for oc adm release new, only picks up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewriting the registry backend will hit an error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status fails on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keep throwing pod security admission warnings: would violate PodSecurity "restricted:v1.24"
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always reverts to default setting
2089254 - CAPI operator: Rotate token secret if it is older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix] machine stuck in Deleting phase when deleting a machineset whose replicas>=2 and machine is in Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath, ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability] one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors don't work correctly with non-English languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit an error if the specified channel is not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix] Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on multi arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures.
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN as a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs.
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling does not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interfaces in the same NIC according to the events and metric
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl (2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab are present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects do not exist
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow customizing the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates checkAccess and useAccessReview doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fail liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice causes the sno to become unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on cluster destroy
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shouldn't be there
2093597 - Import: Advanced option sentence is split into two parts and headlines have no padding
2093600 - Project access tab should apply new permissions before it deletes old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reloading the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function needs update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install totals metric
2093957 - Oc-mirror writes dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips
2094039 - egressIP panics with nil pointer dereference
2094055 - Bump coreos-installer for s390x Secure Execution
2094071 - No runbook created for SouthboundStale alert
2094088 - Columns in NBDB may never be updated by OVNK
2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator
2094152 - Alerts in the virtualization overview status card aren't filtered
2094196 - Add default and validating webhooks for Power VS MAPI
2094227 - Topology: Create Service Binding should not be the last option (even under delete)
2094239 - custom pool Nodes with 0 nodes are always populated in progress bar
2094303 - If og is configured with sa, operator installation will fail.
2094335 - [Nutanix] - debug logs are enabled by default in machine-controller
2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform
2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration
2094525 - Allow automatic upgrades for efs operator
2094532 - ovn-windows CI jobs are broken
2094675 - PTP Dual Nic | Extend Events 4.11 - when phc2sys is killed we get a notification that the ptp4l physical master moved to free run
2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character
2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s
2094801 - Kuryr controller keeps restarting when handling IPs with leading zeros
2094806 - Machine API oVirt component should use K8s 1.24 dependencies
2094816 - Kuryr controller restarts when over quota
2094833 - Repository overview page does not show default PipelineRun template for developer user
2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state
2094864 - Rebase CAPG to latest changes
2094866 - oc-mirror does not always delete all manifests associated with an image during pruning
2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing
2094902 - Fix installer cross-compiling
2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters
2095049 - managed-csi StorageClass does not create PVs
2095071 - Backend tests fail after devfile registry update
2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh
2095110 - [ovn] northd container termination script must use bash
2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp
2095226 - Added changes to verify cloud connection and dhcp services quota of a powervs instance
2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic
2095231 - Kafka Sink sidebar in topology is empty
2095247 - Event sink form doesn't show channel as sink until app is refreshed
2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly, causing pods with multiple volumes to be scheduled to nodes that do not satisfy the volume count
2095256 - Samples Owner needs to be Updated
2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection'
2095362 - oVirt CSI driver operator should use latest go-ovirt-client
2095574 - e2e-agnostic CI job fails
2095687 - Debug Container shown for build logs and UI breaks on click
2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster
2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns
2095756 - CNO panics with concurrent map read/write
2095772 - Memory requests for ovnkube-master containers are over-sized
2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB
2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized
2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode
2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6
2096315 - NodeClockNotSynchronising alert's severity should be critical
2096350 - Web console doesn't display webhook errors for upgrades
2096352 - Collect whole journal in gather
2096380 - acm-simple-kmod references deprecated KVC example
2096392 - Topology node icons are not properly visible in Dark mode
2096394 - Add page Card items background color does not match with column background color in Dark mode
2096413 - br-ex not created due to default bond interface having a different mac address than expected
2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile
2096605 - [vsphere] no validation checking for diskType
2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, new PVs are still getting created in default ResourceGroups
2096855 - oc adm release new failed with error when using an existing multi-arch release image as input
2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider
2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import
2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology
2097043 - No clean way to specify operand issues to KEDA OLM operator
2097047 - MetalLB: matchExpressions used in CRs like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries
2097067 - ClusterVersion history pruner does not always retain initial completed update entry
2097153 - poor performance on API call to vCenter ListTags with thousands of tags
2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects
2097239 - Change Lower CPU limits for Power VS cloud
2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support
2097260 - openshift-install create manifests failed for Power VS platform
2097276 - MetalLB CI deploys the operator via manifests and not using the csv
2097282 - chore: update external-provisioner to the latest upstream release
2097283 - chore: update external-snapshotter to the latest upstream release
2097284 - chore: update external-attacher to the latest upstream release
2097286 - chore: update node-driver-registrar to the latest upstream release
2097334 - oc plugin help shows 'kubectl'
2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11
2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook
2097454 - Placeholder bug for OCP 4.11.0 metadata release
2097503 - chore: rebase against latest external-resizer
2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading
2097607 - Add Power VS support to Webhooks tests in actuator e2e test
2097685 - Ironic-agent can't restart because of existing container
2097716 - settings under httpConfig are dropped with AlertmanagerConfig v1beta1
2097810 - Required Network tools missing for Testing e2e PTP
2097832 - clean up unused IPv6DualStackNoUpgrade feature gate
2097940 - openshift-install destroy cluster traps if vpcRegion not specified
2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs fail
2098172 - oc-mirror does not validate the registry in the storage config
2098175 - invalid license in python-dataclasses-0.8-2.el8 spec
2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file
2098242 - typo in SRO specialresourcemodule
2098243 - Add error check to Platform create for Power VS
2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device
2098508 - Control-plane-machine-set-operator reports panic
2098610 - No need to check the push permission with 'manifests-only' option
2099293 - oVirt cluster API provider should use latest go-ovirt-client
2099330 - Edit application grouping is shown to user with view only access in a cluster
2099340 - CAPI e2e tests for AWS are missing
2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump
2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups
2099528 - Layout issue: No spacing in delete modals
2099561 - Prometheus returns HTTP 500 error on /favicon.ico
2099582 - Format and update Repository overview content
2099611 - Failures on etcd-operator watch channels
2099637 - Should print error when using --keep-manifest-list=false for manifestlist image
2099654 - Topology performance: Endless rerender loop when showing an Http EventSink (KameletBinding)
2099668 - KubeControllerManager should degrade when GC stops working
2099695 - Update CAPG after rebase
2099751 - specialresourcemodule stacktrace while looping over build status
2099755 - EgressIP node's mgmtIP reachability configuration option
2099763 - Update icons for event sources and sinks in topology, Add page, and context menu
2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]
2099821 - exporting a pointer for the loop variable
2099875 - The speaker won't start if there's another component on the host listening on 8080
2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing
2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file
2099968 - [Azure-File-CSI] failed to provision volume in ARO cluster
2100001 - Sync upstream v1.22.0 downstream
2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator
2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment
2100038 - failure to update special-resource-lifecycle table during update Event
2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump
2100138 - release info --bugs has no differentiator between Jira and Bugzilla
2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation
2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar
2100323 - Sqlite-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied"
2100347 - KASO retains old config values when switching from Medium/Default to empty worker
latency profile 2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8 2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running 2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field 2100507 - Remove redundant log lines from obj_retry.go 2100536 - Update API to allow EgressIP node reachability check 2100601 - Update CNO to allow EgressIP node reachability check 2100643 - [Migration] [GCP]OVN can not rollback to SDN 2100644 - openshift-ansible FTBFS on RHEL8 2100669 - Telemetry should not log the full path if it contains a username 2100749 - [OCP 4.11] multipath support needs multipath modules 2100825 - Update machine-api-powervs go modules to latest version 2100841 - tiny openshift-install usability fix for setting KUBECONFIG 2101460 - An etcd member for a new machine was never added to the cluster 2101498 - Revert Bug 2082599: add upper bound to number of failed attempts 2102086 - The base image is still 4.10 for operator-sdk 1.22 2102302 - Dummy bug for 4.10 backports 2102362 - Valid regions should be allowed in GCP install config 2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster 2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption 2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install 2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root 2102947 - [VPA] recommender is logging errors for pods with init containers 2103053 - [4.11] Backport Prow CI improvements from master 2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly 2103080 - br-ex not created due to default bond interface having a different mac address than expected 2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces 2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not 
path-absolute for :path' 2103749 - MachineConfigPool is not getting updated 2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec 2104432 - [dpu-network-operator] Updating images to be consistent with ART 2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack 2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0 2104589 - must-gather namespace should have ?privileged? warn and audit pod security labels besides enforce 2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes 2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference" 2104727 - Bootstrap node should honor http proxy 2104906 - Uninstall fails with Observed a panic: runtime.boundsError 2104951 - Web console doesn't display webhook errors for upgrades 2104991 - Completed pods may not be correctly cleaned up 2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds 2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied 2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history 2105167 - BuildConfig throws error when using a label with a / in it 2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial 2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator 2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. 
2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18 2106051 - Unable to deploy acm-ice using latest SRO 4.11 build 2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0] 2106062 - [4.11] Bootimage bump tracker 2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc" 2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls 2106313 - bond-cni: backport bond-cni GA items to 4.11 2106543 - Typo in must-gather release-4.10 2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI 2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed. rpm-ostree status shows No space left on device 2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted 2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing 2107501 - metallb greenwave tests failure 2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found" 2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade 2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference 2108686 - rpm-ostreed: start limit hit easily 2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate 2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations 2111055 - dummy bug for 4.10.z bz2110938
- References:
https://access.redhat.com/security/cve/CVE-2018-25009
https://access.redhat.com/security/cve/CVE-2018-25010
https://access.redhat.com/security/cve/CVE-2018-25012
https://access.redhat.com/security/cve/CVE-2018-25013
https://access.redhat.com/security/cve/CVE-2018-25014
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-17541
https://access.redhat.com/security/cve/CVE-2020-19131
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2020-35492
https://access.redhat.com/security/cve/CVE-2020-36330
https://access.redhat.com/security/cve/CVE-2020-36331
https://access.redhat.com/security/cve/CVE-2020-36332
https://access.redhat.com/security/cve/CVE-2021-3481
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-3672
https://access.redhat.com/security/cve/CVE-2021-3695
https://access.redhat.com/security/cve/CVE-2021-3696
https://access.redhat.com/security/cve/CVE-2021-3697
https://access.redhat.com/security/cve/CVE-2021-3737
https://access.redhat.com/security/cve/CVE-2021-4115
https://access.redhat.com/security/cve/CVE-2021-4156
https://access.redhat.com/security/cve/CVE-2021-4189
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-23648
https://access.redhat.com/security/cve/CVE-2021-25219
https://access.redhat.com/security/cve/CVE-2021-31535
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-38185
https://access.redhat.com/security/cve/CVE-2021-38593
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-41617
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/cve/CVE-2021-43527
https://access.redhat.com/security/cve/CVE-2021-43818
https://access.redhat.com/security/cve/CVE-2021-44225
https://access.redhat.com/security/cve/CVE-2021-44906
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1215
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1621
https://access.redhat.com/security/cve/CVE-2022-1629
https://access.redhat.com/security/cve/CVE-2022-1706
https://access.redhat.com/security/cve/CVE-2022-1729
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-26691
https://access.redhat.com/security/cve/CVE-2022-26945
https://access.redhat.com/security/cve/CVE-2022-27191
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-28733
https://access.redhat.com/security/cve/CVE-2022-28734
https://access.redhat.com/security/cve/CVE-2022-28735
https://access.redhat.com/security/cve/CVE-2022-28736
https://access.redhat.com/security/cve/CVE-2022-28737
https://access.redhat.com/security/cve/CVE-2022-29162
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-30321
https://access.redhat.com/security/cve/CVE-2022-30322
https://access.redhat.com/security/cve/CVE-2022-30323
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

- Bugs fixed (https://bugzilla.redhat.com/):
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key
2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key
2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key
2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
This advisory contains the following OpenShift Virtualization 4.11.0 images:
RHEL-8-CNV-4.11
===============
hostpath-provisioner-container-v4.11.0-21
kubevirt-tekton-tasks-operator-container-v4.11.0-29
kubevirt-template-validator-container-v4.11.0-17
bridge-marker-container-v4.11.0-26
hostpath-csi-driver-container-v4.11.0-21
cluster-network-addons-operator-container-v4.11.0-26
ovs-cni-marker-container-v4.11.0-26
virtio-win-container-v4.11.0-16
ovs-cni-plugin-container-v4.11.0-26
kubemacpool-container-v4.11.0-26
hostpath-provisioner-operator-container-v4.11.0-24
cnv-containernetworking-plugins-container-v4.11.0-26
kubevirt-ssp-operator-container-v4.11.0-54
virt-cdi-uploadserver-container-v4.11.0-59
virt-cdi-cloner-container-v4.11.0-59
virt-cdi-operator-container-v4.11.0-59
virt-cdi-importer-container-v4.11.0-59
virt-cdi-uploadproxy-container-v4.11.0-59
virt-cdi-controller-container-v4.11.0-59
virt-cdi-apiserver-container-v4.11.0-59
kubevirt-tekton-tasks-modify-vm-template-container-v4.11.0-7
kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.0-7
kubevirt-tekton-tasks-copy-template-container-v4.11.0-7
checkup-framework-container-v4.11.0-67
kubevirt-tekton-tasks-cleanup-vm-container-v4.11.0-7
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.0-7
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.0-7
kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.0-7
vm-network-latency-checkup-container-v4.11.0-67
kubevirt-tekton-tasks-create-datavolume-container-v4.11.0-7
hyperconverged-cluster-webhook-container-v4.11.0-95
cnv-must-gather-container-v4.11.0-62
hyperconverged-cluster-operator-container-v4.11.0-95
kubevirt-console-plugin-container-v4.11.0-83
virt-controller-container-v4.11.0-105
virt-handler-container-v4.11.0-105
virt-operator-container-v4.11.0-105
virt-launcher-container-v4.11.0-105
virt-artifacts-server-container-v4.11.0-105
virt-api-container-v4.11.0-105
libguestfs-tools-container-v4.11.0-105
hco-bundle-registry-container-v4.11.0-587
Security Fix(es):
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)
- golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
- golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1937609 - VM cannot be restarted
1945593 - Live migration should be blocked for VMs with host devices
1968514 - [RFE] Add cancel migration action to virtctl
1993109 - CNV MacOS Client not signed
1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side
2001385 - no "name" label in virt-operator pod
2009793 - KBase to clarify nested support status is missing
2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate
2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)
2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implementation
2026357 - Migration in sequence can be reported as failed even when it succeeded
2029349 - cluster-network-addons-operator does not serve metrics through HTTPS
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2031857 - Add annotation for URL to download the image
2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2035344 - kubemacpool-mac-controller-manager not ready
2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered
2039976 - Pod stuck in "Terminating" state when removing VM with kernel boot and container disks
2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI
2041467 - [SSP] Support custom DataImportCron creating in custom namespaces
2042402 - LiveMigration with postcopy misbehave when failure occurs
2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists
2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047186 - When entering to a RH supported template, it changes the project (namespace) to "OpenShift"
2051899 - 4.11.0 containers
2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn't configure ip nat rules
2052466 - Event does not include reason for inability to live migrate
2052689 - Overhead Memory consumption calculations are incorrect
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056467 - virt-template-validator pods getting scheduled on the same node
2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long
2057310 - qemu-guest-agent does not report information due to selinux denials
2058149 - cluster-network-addons-operator deployment's MULTUS_IMAGE is pointing to brew image
2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs
2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state
2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool
2060585 - [SNO] Failed to find the virt-controller leader pod
2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled.
2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource
2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace
2063792 - No DataImportCron for CentOS 7
2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2064936 - Migration of vm from VMware reports pvc not large enough
2065014 - Feature Highlights in CNV 4.10 contains links to 4.7
2065019 - "Running VMs per template" in the new overview tab counts VMs that are not running
2066768 - [CNV-4.11-HCO] User Cannot List Resource "namespaces" in API group
2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom
2069287 - Two annotations for VM Template provider name
2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2070864 - non-privileged user cannot see catalog tiles
2071488 - "Migrate Node to Node" is confusing.
2071549 - [rhel-9] unable to create a non-root virt-launcher based VM
2071611 - Metrics documentation generators are missing metrics/recording rules
2071921 - Kubevirt RPM is not being built
2073669 - [rhel-9] VM fails to start
2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream
2073982 - [CNV-4.11-RHEL9] 'virtctl' binary fails with 'rc1' with 'virtctl version' command
2074337 - VM created from registry cannot be started
2075200 - VLAN filtering cannot be configured with Intel X710
2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff
2076292 - Upgrade from 4.10.1->4.11 using nightly channel, is not completing with error "could not complete the upgrade process. KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR"
2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file
2076790 - Alert SSPDown is constantly in Firing state
2076908 - clicking on a template in the Running VMs per Template card leads to 404
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2078700 - Windows template boot source should be blank
2078703 - [RFE] Please hide the user defined password when customizing cloud-init
2078709 - VM conditions column have wrong key/values
2078728 - Common template rootDisk is not named correctly
2079366 - rootdisk is not able to edit
2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM
2079783 - Actions are broken in topology view
2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck
2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod
2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop
2080833 - Missing cloud init script editor in the scripts tab
2080835 - SSH key is set using cloud init script instead of new api
2081182 - VM SSH command generated by UI points at api VIP
2081202 - cloud-init for Windows VM generated with corrupted "undefined" section
2081409 - when viewing a common template details page, user need to see the message "can't edit common template" on all tabs
2081671 - SSH service created outside the UI is not discoverable
2081831 - [RFE] Improve disk hotplug UX
2082008 - LiveMigration fails due to loss of connection to destination host
2082164 - Migration progress timeout expects absolute progress
2082912 - [CNV-4.11] HCO Being Unable to Reconcile State
2083093 - VM overview tab is crashed
2083097 - "Mount Windows drivers disk" should not show when the template is not "windows"
2083100 - Something keeps loading in the "node selector" modal
2083101 - "Restore default settings" never become available while editing CPU/Memory
2083135 - VM fails to schedule with vTPM in spec
2083256 - SSP Reconcile logging improvement when CR resources are changed
2083595 - [RFE] Disable VM descheduler if the VM is not live migratable
2084102 - [e2e] Many elements are lacking proper selector like 'data-test-id' or 'data-test'
2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails
2084418 - "Invalid SSH public key format" appears when drag ssh key file to "Authorized SSH Key" field
2084431 - User credentials for ssh is not in correct format
2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab.
2091406 - wrong template namespace label when creating a vm with wizard
2091754 - Scheduling and scripts tab should be editable while the VM is running
2091755 - Change bottom "Save" to "Apply" on cloud-init script form
2091756 - The root disk of cloned template should be editable
2091758 - "OS" should be "Operating system" in template filter
2091760 - The provider should be empty if it's not set during cloning
2091761 - Miss "Edit labels" and "Edit annotations" in template kebab button
2091762 - Move notification above the tabs in template details page
2091764 - Clone a template should lead to the template details
2091765 - "Edit bootsource" is keeping in load in template actions dropdown
2091766 - "Are you sure you want to leave this page?" pops up when click the "Templates" link
2091853 - On Snapshot tab of single VM "Restore" button should move to the kebab actions together with the Delete
2091863 - BootSource edit modal should list affected templates
2091868 - Catalog list view has two columns named "BootSource"
2091889 - Devices should be editable for customize template
2091897 - username is missing in the generated ssh command
2091904 - VM is not started if adding "Authorized SSH Key" during vm creation
2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root
2091940 - SSH is not enabled in vm details after restart the VM
2091945 - delete a template should lead to templates list
2091946 - Add disk modal shows wrong units
2091982 - Got a lot of "Reconciler error" in cdi-deployment log after adding custom DataImportCron to hco
2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank
2092052 - Virtualization should be omitted in Catalog breadcrumbs
2092071 - Getting started card in Virtualization overview can not be hidden.
2092079 - Error message stays even when problematic field is dismissed
2092158 - PrometheusRule kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO
2092228 - Ensure Machine Type for new VMs is 8.6
2092230 - [RFE] Add indication/mark to deprecated template
2092306 - VM is stuck with WaitingForVolumeBinding if creating via "Boot from CD"
2092337 - os is empty in VM details page
2092359 - [e2e] data-test-id includes all pvc name
2092654 - [RFE] No obvious way to delete the ssh key from the VM
2092662 - No url example for rhel and windows template
2092663 - no hyperlink for URL example in disk source "url"
2092664 - no hyperlink to the cdi uploadproxy URL
2092781 - Details card should be removed for non admins.
2092783 - Top consumers' card should be removed for non admins.
2092787 - Operators links should be removed from Getting started card
2092789 - "Learn more about Operators" link should lead to the Red Hat documentation
2092951 - "Edit BootSource" action should have more explicit information when disabled
2093282 - Remove links to 'all-namespaces/' for non-privileged user
2093691 - Creation flow drawer left padding is broken
2093713 - Required fields in creation flow should be highlighted if empty
2093715 - Optional parameters section in creation flow is missing bottom padding
2093716 - CPU|Memory modal button should say "Restore template settings"
2093772 - Add a service in environment it reminds a pending change in boot order
2093773 - Console crashed if adding a service without serial number
2093866 - Cannot create vm from the template vm-template-example
2093867 - OS for template 'vm-template-example' should matching the version of the image
2094202 - Cloud-init username field should have hint
2094207 - Cloud-init password field should have auto-generate option
2094208 - SSH key input is missing validation
2094217 - YAML view should reflect changes in SSH form
2094222 - "?" icon should be placed after red asterisk in required fields
2094323 - Workload profile should be editable in template details page
2094405 - adding resource on environment isn't showing on disks list when vm is running
2094440 - Utilization pie charts figures are not based on current data
2094451 - PVC selection in VM creation flow does not work for non-priv user
2094453 - CD Source selection in VM creation flow is missing Upload option
2094465 - Typo in Source tooltip
2094471 - Node selector modal for non-privileged user
2094481 - Tolerations modal for non-privileged user
2094486 - Add affinity rule modal
2094491 - Affinity rules modal button
2094495 - Descheduler modal has same text in two lines
2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id
2094665 - Dedicated Resources modal for non-privileged user
2094678 - Secrets and ConfigMaps can't be added to Windows VM
2094727 - Creation flow should have VM info in header row
2094807 - hardware devices dropdown has group title even with no devices in cluster
2094813 - Cloudinit password is seen in wizard
2094848 - Details card on Overview page - 'View details' link is missing
2095125 - OS is empty in the clone modal
2095129 - "undefined" appears in rootdisk line in clone modal
2095224 - affinity modal for non-privileged users
2095529 - VM migration cancelation in kebab action should have shorter name
2095530 - Column sizes in VM list view
2095532 - Node column in VM list view is visible to non-privileged user
2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime
2095570 - Details tab of VM should not have Node info for non-privileged user
2095573 - Disks created as environment or scripts should have proper label
2095953 - VNC console controls layout
2095955 - VNC console tabs
2096166 - Template "vm-template-example" is binding with namespace "default"
2096206 - Inconsistent capitalization in Template Actions
2096208 - Templates in the catalog list is not sorted
2096263 - Incorrectly displaying units for Disks size or Memory field in various places
2096333 - virtualization overview, related operators title is not aligned
2096492 - Cannot create vm from a cloned template if its boot source is edited
2096502 - "Restore template settings" should be removed from template CPU editor
2096510 - VM can be created without any disk
2096511 - Template shows "no Boot Source" and label "Source available" at the same time
2096620 - in templates list, edit boot reference kebab action opens a modal with different title
2096781 - Remove boot source provider while edit boot source reference
2096801 - vnc thumbnail in virtual machine overview should be active on page load
2096845 - Windows template's scripts tab is crashed
2097328 - virtctl guestfs shouldn't required uid = 0
2097370 - missing titles for optional parameters in wizard customization page
2097465 - Count is not updating for 'prometheusrule' component when metrics kubevirt_hco_out_of_band_modifications_count executed
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2098134 - "Workload profile" column is not showing completely in template list
2098135 - Workload is not showing correct in catalog after change the template's workload
2098282 - Javascript error when changing boot source of custom template to be an uploaded file
2099443 - No "Quick create virtualmachine" button for template 'vm-template-example'
2099533 - ConsoleQuickStart for HCO CR's VM is missing
2099535 - The cdi-uploadproxy certificate url should be opened in a new tab
2099539 - No storage option for upload while editing a disk
2099566 - Cloudinit should be replaced by cloud-init in all places
2099608 - "DynamicB" shows in vm-example disk size
2099633 - Doc links needs to be updated
2099639 - Remove user line from the ssh command section
2099802 - Details card link shouldn't be hard-coded
2100054 - Windows VM with WSL2 guest fails to migrate
2100284 - Virtualization overview is crashed
2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101485 - Cloudinit should be replaced by cloud-init in all places
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer
2102076 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2102122 - non-priv user cannot load dataSource while edit template's rootdisk
2102124 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2102125 - vm clone modal is displaying DV size instead of PVC size
2102127 - Cannot add NIC to VM template as non-priv user
2102129 - All templates are labeling "source available" in template list page
2102131 - The number of hardware devices is not correct in vm overview tab
2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2102143 - vm clone modal is displaying DV size instead of PVC size
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102543 - Add button moved to right
2102544 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102545 - VM filter has two "Other" checkboxes which are triggered together
2104617 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2106175 - All pages are crashed after visit Virtualization -> Overview
2106258 - All pages are crashed after visit Virtualization -> Overview
2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions
2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111562 - kubevirt plugin console crashed after visit vmi page
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202006-0222",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.0.1"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "gitlab",
"scope": "lt",
"trust": 1.0,
"vendor": "gitlab",
"version": "13.1.2"
},
{
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "communications cloud native core policy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"model": "gitlab",
"scope": "gte",
"trust": 1.0,
"vendor": "gitlab",
"version": "13.0.0"
},
{
"model": "gitlab",
"scope": "gte",
"trust": 1.0,
"vendor": "gitlab",
"version": "13.1.0"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "gitlab",
"scope": "lt",
"trust": 1.0,
"vendor": "gitlab",
"version": "12.10.13"
},
{
"model": "gitlab",
"scope": "lt",
"trust": 1.0,
"vendor": "gitlab",
"version": "13.0.8"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "pcre",
"scope": "lt",
"trust": 1.0,
"vendor": "pcre",
"version": "8.44"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:pcre:pcre:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.44",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "11.0.1",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:community:*:*:*",
"cpe_name": [],
"versionEndExcluding": "13.1.2",
"versionStartIncluding": "13.1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:enterprise:*:*:*",
"cpe_name": [],
"versionEndExcluding": "13.1.2",
"versionStartIncluding": "13.1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:community:*:*:*",
"cpe_name": [],
"versionEndExcluding": "13.0.8",
"versionStartIncluding": "13.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:enterprise:*:*:*",
"cpe_name": [],
"versionEndExcluding": "13.0.8",
"versionStartIncluding": "13.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:community:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.10.13",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:enterprise:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.10.13",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.15.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "PACKETSTORM",
"id": "165758"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "168352"
},
{
"db": "PACKETSTORM",
"id": "168392"
}
],
"trust": 0.7
},
"cve": "CVE-2020-14155",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "VHN-167005",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "LOW",
"baseScore": 5.3,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"impactScore": 1.4,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2020-14155",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-167005",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "libpcre in PCRE before 8.44 allows an integer overflow via a large number after a (?C substring. PCRE is an open source regular expression library written in C language by Philip Hazel software developer. An input validation error vulnerability exists in libpcre in versions prior to PCRE 8.44. An attacker could exploit this vulnerability to execute arbitrary code or cause an application to crash on the system with a large number of requests. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-02-01-1 macOS Big Sur 11.2, Security Update 2021-001\nCatalina, Security Update 2021-001 Mojave\n\nmacOS Big Sur 11.2, Security Update 2021-001 Catalina, Security\nUpdate 2021-001 Mojave addresses the following issues. Information\nabout the security content is also available at\nhttps://support.apple.com/HT212147. \n\nAnalytics\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. \nCVE-2021-1761: Cees Elzinga\n\nAPFS\nAvailable for: macOS Big Sur 11.0.1\nImpact: A local user may be able to read arbitrary files\nDescription: The issue was addressed with improved permissions logic. \nCVE-2021-1797: Thomas Tempelmann\n\nCFNetwork Cache\nAvailable for: macOS Catalina 10.15.7 and macOS Mojave 10.14.6\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: An integer overflow was addressed with improved input\nvalidation. \nCVE-2020-27945: Zhuo Liang of Qihoo 360 Vulcan Team\n\nCoreAnimation\nAvailable for: macOS Big Sur 11.0.1\nImpact: A malicious application could execute arbitrary code leading\nto compromise of user information\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2021-1760: @S0rryMybad of 360 Vulcan Team\n\nCoreAudio\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. \nCVE-2021-1747: JunDong Xie of Ant Security Light-Year Lab\n\nCoreGraphics\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-1776: Ivan Fratric of Google Project Zero\n\nCoreMedia\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-1759: Hou JingYi (@hjy79425575) of Qihoo 360 CERT\n\nCoreText\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted text file may lead to\narbitrary code execution\nDescription: A stack overflow was addressed with improved input\nvalidation. \nCVE-2021-1772: Mickey Jin of Trend Micro working with Trend Micro\u2019s\nZero Day Initiative\n\nCoreText\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1792: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative\n\nCrash Reporter\nAvailable for: macOS Catalina 10.15.7\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-1761: Cees Elzinga\n\nCrash Reporter\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A local attacker may be able to elevate their privileges\nDescription: Multiple issues were addressed with improved logic. \nCVE-2021-1787: James Hutchins\n\nCrash Reporter\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A local user may be able to create or modify system files\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1786: Csaba Fitzl (@theevilbit) of Offensive Security\n\nDirectory Utility\nAvailable for: macOS Catalina 10.15.7\nImpact: A malicious application may be able to access private\ninformation\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2020-27937: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nEndpoint Security\nAvailable for: macOS Catalina 10.15.7\nImpact: A local attacker may be able to elevate their privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1802: Zhongcheng Li (@CK01) from WPS Security Response\nCenter\n\nFairPlay\nAvailable for: macOS Big Sur 11.0.1\nImpact: A malicious application may be able to disclose kernel memory\nDescription: An out-of-bounds read issue existed that led to the\ndisclosure of kernel memory. This was addressed with improved input\nvalidation. \nCVE-2021-1791: Junzhi Lu (@pwn0rz), Qi Sun \u0026 Mickey Jin of Trend\nMicro working with Trend Micro\u2019s Zero Day Initiative\n\nFontParser\nAvailable for: macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted font may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. 
\nCVE-2021-1790: Peter Nguyen Vu Hoang of STAR Labs\n\nFontParser\nAvailable for: macOS Mojave 10.14.6\nImpact: Processing a maliciously crafted font may lead to arbitrary\ncode execution\nDescription: This issue was addressed by removing the vulnerable\ncode. \nCVE-2021-1775: Mickey Jin and Qi Sun of Trend Micro\n\nFontParser\nAvailable for: macOS Mojave 10.14.6\nImpact: A remote attacker may be able to leak memory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2020-29608: Xingwei Lin of Ant Security Light-Year Lab\n\nFontParser\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1758: Peter Nguyen of STAR Labs\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An access issue was addressed with improved memory\nmanagement. \nCVE-2021-1783: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1741: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1743: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative, Xingwei Lin of Ant Security Light-\nYear Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2021-1773: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: An out-of-bounds read issue existed in the curl. This\nissue was addressed with improved bounds checking. \nCVE-2021-1778: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-1736: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1785: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: This issue was addressed with improved checks. \nCVE-2021-1766: Danny Rosseau of Carve Systems\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1818: Xingwei Lin from Ant-Financial Light-Year Security Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-1742: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1746: Mickey Jin \u0026 Qi Sun of Trend Micro, Xingwei Lin of Ant\nSecurity Light-Year Lab\nCVE-2021-1754: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1774: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1777: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1793: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. \nCVE-2021-1737: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1738: Lei Sun\nCVE-2021-1744: Xingwei Lin of Ant Security Light-Year Lab\n\nIOKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A logic error in kext loading was addressed with\nimproved state handling. \nCVE-2021-1779: Csaba Fitzl (@theevilbit) of Offensive Security\n\nIOSkywalkFamily\nAvailable for: macOS Big Sur 11.0.1\nImpact: A local attacker may be able to elevate their privileges\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1757: Pan ZhenPeng (@Peterpan0927) of Alibaba Security,\nProteas\n\nKernel\nAvailable for: macOS Catalina 10.15.7 and macOS Mojave 10.14.6\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A logic issue existed resulting in memory corruption. \nThis was addressed with improved state management. \nCVE-2020-27904: Zuozhi Fan (@pattern_F_) of Ant Group Tianqiong\nSecurity Lab\n\nKernel\nAvailable for: macOS Big Sur 11.0.1\nImpact: A remote attacker may be able to cause a denial of service\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2021-1764: @m00nbsd\n\nKernel\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A malicious application may be able to elevate privileges. \nApple is aware of a report that this issue may have been actively\nexploited. \nDescription: A race condition was addressed with improved locking. \nCVE-2021-1782: an anonymous researcher\n\nKernel\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: Multiple issues were addressed with improved logic. \nCVE-2021-1750: @0xalsr\n\nLogin Window\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: An attacker in a privileged network position may be able to\nbypass authentication policy\nDescription: An authentication issue was addressed with improved\nstate management. \nCVE-2020-29633: Jewel Lambert of Original Spin, LLC. \n\nMessages\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A user that is removed from an iMessage group could rejoin\nthe group\nDescription: This issue was addressed with improved checks. \nCVE-2021-1771: Shreyas Ranganatha (@strawsnoceans)\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. \nCVE-2021-1762: Mickey Jin of Trend Micro\n\nModel I/O\nAvailable for: macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted file may lead to heap\ncorruption\nDescription: This issue was addressed with improved checks. 
\nCVE-2020-29614: ZhiWei Sun (@5n1p3r0010) from Topsec Alpha Lab\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: A buffer overflow was addressed with improved bounds\nchecking. \nCVE-2021-1763: Mickey Jin of Trend Micro working with Trend Micro\u2019s\nZero Day Initiative\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted image may lead to heap\ncorruption\nDescription: This issue was addressed with improved checks. \nCVE-2021-1767: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-1745: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1753: Mickey Jin of Trend Micro working with Trend Micro\u2019s\nZero Day Initiative\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2021-1768: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative\n\nNetFSFramework\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Mounting a maliciously crafted Samba network share may lead\nto arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1751: Mikko Kentt\u00e4l\u00e4 (@Turmio_) of SensorFu\n\nOpenLDAP\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. \nCVE-2020-25709\n\nPower Management\nAvailable for: macOS Mojave 10.14.6, macOS Catalina 10.15.7\nImpact: A malicious application may be able to elevate privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2020-27938: Tim Michaud (@TimGMichaud) of Leviathan\n\nScreen Sharing\nAvailable for: macOS Big Sur 11.0.1\nImpact: Multiple issues in pcre\nDescription: Multiple issues were addressed by updating to version\n8.44. \nCVE-2019-20838\nCVE-2020-14155\n\nSQLite\nAvailable for: macOS Catalina 10.15.7\nImpact: Multiple issues in SQLite\nDescription: Multiple issues were addressed by updating SQLite to\nversion 3.32.3. \nCVE-2020-15358\n\nSwift\nAvailable for: macOS Big Sur 11.0.1\nImpact: A malicious attacker with arbitrary read and write capability\nmay be able to bypass Pointer Authentication\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-1769: CodeColorist of Ant-Financial Light-Year Labs\n\nWebKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2021-1788: Francisco Alonso (@revskills)\n\nWebKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: Maliciously crafted web content may violate iframe sandboxing\npolicy\nDescription: This issue was addressed with improved iframe sandbox\nenforcement. \nCVE-2021-1765: Eliya Stein of Confiant\nCVE-2021-1801: Eliya Stein of Confiant\n\nWebKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2021-1789: @S0rryMybad of 360 Vulcan Team\n\nWebKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: A remote attacker may be able to cause arbitrary code\nexecution. Apple is aware of a report that this issue may have been\nactively exploited. \nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-1871: an anonymous researcher\nCVE-2021-1870: an anonymous researcher\n\nWebRTC\nAvailable for: macOS Big Sur 11.0.1\nImpact: A malicious website may be able to access restricted ports on\narbitrary servers\nDescription: A port redirection issue was addressed with additional\nport validation. \nCVE-2021-1799: Gregory Vishnepolsky \u0026 Ben Seri of Armis Security, and\nSamy Kamkar\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Junzhi Lu (@pwn0rz), Mickey Jin \u0026 Jesse\nChange of Trend Micro for their assistance. \n\nlibpthread\nWe would like to acknowledge CodeColorist of Ant-Financial Light-Year\nLabs for their assistance. \n\nLogin Window\nWe would like to acknowledge Jose Moises Romero-Villanueva of\nCrySolve for their assistance. \n\nMail Drafts\nWe would like to acknowledge Jon Bottarini of HackerOne for their\nassistance. \n\nScreen Sharing Server\nWe would like to acknowledge @gorelics for their assistance. \n\nWebRTC\nWe would like to acknowledge Philipp Hancke for their assistance. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\nSummary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. \n\nSecurity Fix(es):\n\n* nodejs-immer: prototype pollution may lead to DoS or remote code\nexecution (CVE-2021-3757)\n\n* mig-controller: incorrect namespaces handling may lead to not authorized\nusage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5. Description:\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 10\npackages that are part of the JBoss Core Services offering. \nRefer to the Release Notes for information on the most significant bug\nfixes and enhancements included in this release. \n\nSecurity Fix(es):\n\n* httpd: Single zero byte stack overflow in mod_auth_digest\n(CVE-2020-35452)\n* httpd: mod_session NULL pointer dereference in parser (CVE-2021-26690)\n* httpd: Heap overflow in mod_session (CVE-2021-26691)\n* httpd: mod_proxy_wstunnel tunneling of non Upgraded connection\n(CVE-2019-17567)\n* httpd: MergeSlashes regression (CVE-2021-30641)\n* httpd: mod_proxy NULL pointer dereference (CVE-2020-13950)\n* jbcs-httpd24-openssl: openssl: NULL pointer dereference in\nX509_issuer_and_serial_hash() (CVE-2021-23841)\n* openssl: Read buffer overruns processing ASN.1 strings (CVE-2021-3712)\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n* pcre: buffer over-read in JIT when UTF is disabled (CVE-2019-20838)\n* pcre: integer overflow in libpcre (CVE-2020-14155)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1848436 - CVE-2020-14155 pcre: Integer overflow when parsing callout numeric arguments\n1848444 - CVE-2019-20838 pcre: Buffer over-read in JIT when UTF is disabled and \\X or \\R has fixed quantifier greater than 1\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1966724 - CVE-2020-35452 httpd: Single zero byte stack overflow in mod_auth_digest\n1966729 - CVE-2021-26690 httpd: mod_session: NULL pointer dereference when parsing Cookie header\n1966732 - CVE-2021-26691 httpd: mod_session: Heap overflow via a crafted SessionHeader value\n1966738 - CVE-2020-13950 httpd: mod_proxy NULL pointer dereference\n1966740 - CVE-2019-17567 httpd: mod_proxy_wstunnel tunneling of non Upgraded connection\n1966743 - CVE-2021-30641 httpd: Unexpected URL matching with \u0027MergeSlashes OFF\u0027\n1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings\n\n6. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Bugs fixed (https://bugzilla.redhat.com/):\n\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nTRACING-2235 - Release RHOSDT 2.1\n\n6. 
==========================================================================\nUbuntu Security Notice USN-5425-1\nMay 17, 2022\n\npcre3 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in PCRE. \n\nSoftware Description:\n- pcre3: Perl 5 Compatible Regular Expression Library\n\nDetails:\n\nYunho Kim discovered that PCRE incorrectly handled memory when\nhandling certain regular expressions. An attacker could possibly use\nthis issue to cause applications using PCRE to expose sensitive\ninformation. This issue only affects Ubuntu 18.04 LTS,\nUbuntu 20.04 LTS, Ubuntu 21.10 and Ubuntu 22.04 LTS. (CVE-2019-20838)\n\nIt was discovered that PCRE incorrectly handled memory when\nhandling certain regular expressions. An attacker could possibly use\nthis issue to cause applications using PCRE to have unexpected\nbehavior. This issue only affects Ubuntu 14.04 ESM, Ubuntu 16.04 ESM,\nUbuntu 18.04 LTS and Ubuntu 20.04 LTS. (CVE-2020-14155)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n libpcre3 2:8.39-13ubuntu0.22.04.1\n\nUbuntu 21.10:\n libpcre3 2:8.39-13ubuntu0.21.10.1\n\nUbuntu 20.04 LTS:\n libpcre3 2:8.39-12ubuntu0.1\n\nUbuntu 18.04 LTS:\n libpcre3 2:8.39-9ubuntu0.1\n\nUbuntu 16.04 ESM:\n libpcre3 2:8.38-3.1ubuntu0.1~esm1\n\nUbuntu 14.04 ESM:\n libpcre3 1:8.31-2ubuntu2.3+esm1\n\nAfter a standard system update you need to restart applications using PCRE,\nsuch as the Apache HTTP server and Nginx, to make all the necessary\nchanges. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. 
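The USN lists one fixed package version per Ubuntu release, so checking whether a system is patched amounts to comparing the installed libpcre3 version against the fixed one under Debian version ordering. A minimal sketch, with a deliberately simplified comparator (it handles the epoch prefix and numeric/alphabetic runs, but ignores dpkg corner cases such as `~`):

```python
import re
from itertools import zip_longest

def _split_epoch(version):
    # Debian versions may carry an "epoch:" prefix, e.g. "2:8.39-12ubuntu0.1".
    epoch, sep, rest = version.partition(":")
    if sep and epoch.isdigit():
        return int(epoch), rest
    return 0, version

def _tokens(s):
    # Alternating runs of digits and non-digits; digit runs compare numerically.
    return re.findall(r"\d+|\D+", s)

def dpkg_lt(a, b):
    """Simplified 'a sorts before b' (ignores dpkg's '~' and letter rules)."""
    ea, ra = _split_epoch(a)
    eb, rb = _split_epoch(b)
    if ea != eb:
        return ea < eb
    for ta, tb in zip_longest(_tokens(ra), _tokens(rb), fillvalue=""):
        if ta == tb:
            continue
        if ta.isdigit() and tb.isdigit():
            return int(ta) < int(tb)
        return ta < tb
    return False

def is_patched(installed, fixed):
    return not dpkg_lt(installed, fixed)

# Ubuntu 20.04 LTS fix from USN-5425-1
print(is_patched("2:8.39-12ubuntu0.1", "2:8.39-12ubuntu0.1"))  # True
print(is_patched("2:8.39-12", "2:8.39-12ubuntu0.1"))           # False
```

On a real system the installed version would come from `dpkg-query -W libpcre3`, and `dpkg --compare-versions` implements the full ordering that this sketch approximates.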
\n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5068\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n* sanitize-url: XSS (CVE-2021-23648)\n* minimist: prototype pollution (CVE-2021-44906)\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local users (CVE-2022-29810)\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nFor more details about the 
security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-x86_64\n\nThe image digest is\nsha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4\n\n(For aarch64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-aarch64\n\nThe image digest is\nsha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-s390x\n\nThe image digest is\nsha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le\n\nThe image digest is\nsha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. 
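The release images above are pinned by immutable content digests rather than by mutable tags. As a minimal sketch of why that guarantees the exact content (the manifest bytes below are a made-up stand-in, not a real OpenShift release manifest): an image digest is simply `sha256:` plus the hex SHA-256 of the manifest bytes, so any tampering changes the digest and the pull fails to match.

```python
import hashlib

def content_digest(manifest_bytes: bytes) -> str:
    # OCI/Docker image digests are "sha256:" plus the hex SHA-256 of the
    # canonical manifest bytes.
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

def matches_pin(manifest_bytes: bytes, pinned: str) -> bool:
    # Identical content always yields the identical digest, so a plain
    # string comparison is sufficient to verify a pinned reference.
    return content_digest(manifest_bytes) == pinned

blob = b'{"schemaVersion": 2}'   # hypothetical manifest content
pin = content_digest(blob)
print(matches_pin(blob, pin))          # True
print(matches_pin(b"tampered", pin))   # False
```

This is why the advisory can say `ocp-release:4.11.0-x86_64` corresponds to one specific digest: referencing the image as `ocp-release@sha256:...` fixes the content regardless of where the tag later points.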
Bugs fixed (https://bugzilla.redhat.com/):\n\n1817075 - MCC \u0026 MCO don\u0027t free leader leases during shut down -\u003e 10 minutes of leader election timeouts\n1822752 - cluster-version operator stops applying manifests when blocked by a precondition check\n1823143 - oc adm release extract --command, --tools doesn\u0027t pull from localregistry when given a localregistry/image\n1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV\n1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name\n1896181 - [ovirt] install fails: due to terraform error \"Cannot run VM. VM is being updated\" on vm resource\n1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group\n1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready\n1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource\n1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)\n1917898 - [ovirt] install fails: due to terraform error \"Tag not matched: expect \u003cfault\u003e but got \u003chtml\u003e\" on vm resource\n1918005 - [vsphere] If there are multiple port groups with the same name installation fails\n1918417 - IPv6 errors after exiting crictl\n1918690 - Should update the KCM resource-graph timely with the latest configure\n1919980 - oVirt installer fails due to terraform error \"Failed to wait for Templte(...) 
to become ok\"\n1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded\n1923536 - Image pullthrough does not pass 429 errors back to capable clients\n1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API\n1932812 - Installer uses the terraform-provider in the Installer\u0027s directory if it exists\n1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value\n1943937 - CatalogSource incorrect parsing validation\n1944264 - [ovn] CNO should gracefully terminate OVN databases\n1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2\n1945329 - In k8s 1.21 bump conntrack \u0027should drop INVALID conntrack entries\u0027 tests are disabled\n1948556 - Cannot read property \u0027apiGroup\u0027 of undefined error viewing operator CSV\n1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x\n1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap\n1957668 - oc login does not show link to console\n1958198 - authentication operator takes too long to pick up a configuration change\n1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true\n1961233 - Add CI test coverage for DNS availability during upgrades\n1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects\n1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata\n1965934 - can not get new result with \"Refresh off\" if click \"Run queries\" again\n1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone. 
\n1968253 - GCP CSI driver can provision volume with access mode ROX\n1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones\n1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases\n1976111 - [tracker] multipathd.socket is missing start conditions\n1976782 - Openshift registry starts to segfault after S3 storage configuration\n1977100 - Pod failed to start with message \"set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory\"\n1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\\\"Approved\\\"]\n1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7-\u003e4.8\n1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n1982737 - OLM does not warn on invalid CSV\n1983056 - IP conflict while recreating Pod with fixed name\n1984785 - LSO CSV does not contain disconnected annotation\n1989610 - Unsupported data types should not be rendered on operand details page\n1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n1990384 - 502 error on \"Observe -\u003e Alerting\" UI after disabled local alertmanager\n1992553 - all the alert rules\u0027 annotations \"summary\" and \"description\" should comply with the OpenShift alerting guidelines\n1994117 - Some hardcodes are detected at the code level in orphaned code\n1994820 - machine controller doesn\u0027t send vCPU quota failed messages to cluster install logs\n1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods\n1996544 - AWS region ap-northeast-3 is missing in installer prompt\n1996638 - Helm operator manager container 
restart when CR is creating\u0026deleting\n1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace\n1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow\n1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc\n1999325 - FailedMount MountVolume.SetUp failed for volume \"kube-api-access\" : object \"openshift-kube-scheduler\"/\"kube-root-ca.crt\" not registered\n1999529 - Must gather fails to gather logs for all the namespace if server doesn\u0027t have volumesnapshotclasses resource\n1999891 - must-gather collects backup data even when Pods fails to be created\n2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap\n2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks\n2002602 - Storageclass creation page goes blank when \"Enable encryption\" is clicked if there is a syntax error in the configmap\n2002868 - Node exporter not able to scrape OVS metrics\n2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet\n2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO\n2006067 - Objects are not valid as a React child\n2006201 - ovirt-csi-driver-node pods are crashing intermittently\n2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment\n2007340 - Accessibility issues on topology - list view\n2007611 - TLS issues with the internal registry and AWS S3 bucket\n2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge\n2008486 - Double scroll bar shows up on dragging the task quick search to the bottom\n2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19\n2009352 - Add image-registry usage metrics to telemeter\n2009845 - Respect overrides changes during 
installation\n2010361 - OpenShift Alerting Rules Style-Guide Compliance\n2010364 - OpenShift Alerting Rules Style-Guide Compliance\n2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]\n2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS\n2011895 - Details about cloud errors are missing from PV/PVC errors\n2012111 - LSO still try to find localvolumeset which is already deleted\n2012969 - need to figure out why osupdatedstart to reboot is zero seconds\n2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)\n2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user\n2013734 - unable to label downloads route in openshift-console namespace\n2013822 - ensure that the `container-tools` content comes from the RHAOS plashets\n2014161 - PipelineRun logs are delayed and stuck on a high log volume\n2014240 - Image registry uses ICSPs only when source exactly matches image\n2014420 - Topology page is crashed\n2014640 - Cannot change storage class of boot disk when cloning from template\n2015023 - Operator objects are re-created even after deleting it\n2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance\n2015356 - Different status shows on VM list page and details page\n2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types\n2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff\n2015800 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource\n2016534 - externalIP does not work when egressIP is also present\n2017001 - Topology 
context menu for Serverless components always open downwards\n2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs\n2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI\n2019532 - Logger object in LSO does not log source location accurately\n2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted\n2020483 - Parameter $__auto_interval_period is in Period drop-down list\n2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working\n2021041 - [vsphere] Not found TagCategory when destroying ipi cluster\n2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible\n2022253 - Web terminal view is broken\n2022507 - Pods stuck in OutOfpods state after running cluster-density\n2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2022745 - Cluster reader is not able to list NodeNetwork* objects\n2023295 - Must-gather tool gathering data from custom namespaces. 
\n2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes\n2024427 - oc completion zsh doesn\u0027t auto complete\n2024708 - The form for creating operational CRs is badly rendering filed names (\"obsoleteCPUs\" -\u003e \"Obsolete CP Us\" )\n2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation\n2026356 - [IPI on Azure] The bootstrap machine type should be same as master\n2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted\n2027603 - [UI] Dropdown doesn\u0027t close on it\u0027s own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2027613 - Users can\u0027t silence alerts from the dev console\n2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition\n2028532 - noobaa-pg-db-0 pod stuck in Init:0/2\n2028821 - Misspelled label in ODF management UI - MCG performance view\n2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf\n2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node\u0027s not achieving new revision\n2029797 - Uncaught exception: ResizeObserver loop limit exceeded\n2029835 - CSI migration for vSphere: Inline-volume tests failing\n2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host\n2030530 - VM created via customize wizard has single quotation marks surrounding its password\n2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled\n2030776 - e2e-operator always uses quay master images during presubmit tests\n2032559 - CNO allows migration to dual-stack in unsupported 
configurations\n2032717 - Unable to download ignition after coreos-installer install --copy-network\n2032924 - PVs are not being cleaned up after PVC deletion\n2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation\n2033575 - monitoring targets are down after the cluster run for more than 1 day\n2033711 - IBM VPC operator needs e2e csi tests for ibmcloud\n2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address\n2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4\n2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37\n2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save\n2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn\u0027t authenticated\n2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready\n2035005 - MCD is not always removing in progress taint after a successful update\n2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks\n2035899 - Operator-sdk run bundle doesn\u0027t support arm64 env\n2036202 - Bump podman to \u003e= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work\n2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd\n2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF\n2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default\n2037447 - Ingress Operator is not closing TCP connections. 
\n2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found\n2037542 - Pipeline Builder footer is not sticky and yaml tab doesn\u0027t use full height\n2037610 - typo for the Terminated message from thanos-querier pod description info\n2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10\n2037625 - AppliedClusterResourceQuotas can not be shown on project overview\n2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption\n2037628 - Add test id to kms flows for automation\n2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster\n2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied\n2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack\n2038115 - Namespace and application bar is not sticky anymore\n2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations\n2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken\n2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group\n2039135 - the error message is not clear when using \"opm index prune\" to prune a file-based index image\n2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2039253 - ovnkube-node crashes on duplicate endpoints\n2039256 - Domain validation fails when TLD contains a digit. \n2039277 - Topology list view items are not highlighted on keyboard navigation\n2039462 - Application tab in User Preferences dropdown menus are too wide. 
\n2039477 - validation icon is missing from Import from git\n2039589 - The toolbox command always ignores [command] the first time\n2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project\n2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column\n2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names\n2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong\n2040488 - OpenShift-Ansible BYOH Unit Tests are Broken\n2040635 - CPU Utilisation is negative number for \"Kubernetes / Compute Resources / Cluster\" dashboard\n2040654 - \u0027oc adm must-gather -- some_script\u0027 should exit with same non-zero code as the failed \u0027some_script\u0027 exits\n2040779 - Nodeport svc not accessible when the backend pod is on a window node\n2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes\n2041133 - \u0027oc explain route.status.ingress.conditions\u0027 shows type \u0027Currently only Ready\u0027 but actually is \u0027Admitted\u0027\n2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error\n2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can\u0027t work\n2041769 - Pipeline Metrics page not showing data for normal user\n2041774 - Failing git detection should not recommend Devfiles as import strategy\n2041814 - The KubeletConfigController wrongly process multiple confs for a pool\n2041940 - Namespace pre-population not happening till a Pod is created\n2042027 - Incorrect feedback for \"oc label pods --all\"\n2042348 - Volume ID is missing in output message when expanding volume which is not mounted. 
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - <x> available of <y> text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - `oc debug node` does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type *v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to `vmx-15`
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in `oc get`
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy *.app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the `oc version --client`
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_* metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because is passes a redundant "IMG=" on the the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - sing the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the `oc-mirror version` is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing
state apart from Watchdog and AlertmanagerReceiversNotConfigured\n2070160 - Copy-to-clipboard and \u003cpre\u003e elements cause display issues for ACM dynamic plugins\n2070172 - SRO uses the chart\u0027s name as Helm release, not the SpecialResource\u0027s\n2070181 - [MAPO] serverGroupName ignored\n2070457 - Image vulnerability Popover overflows from the visible area\n2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes\n2070703 - some ipv6 network policy tests consistently failing\n2070720 - [UI] Filter reset doesn\u0027t work on Pods/Secrets/etc pages and complete list disappears\n2070731 - details switch label is not clickable on add page\n2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled\n2070792 - service \"openshift-marketplace/marketplace-operator-metrics\" is not annotated with capability\n2070805 - ClusterVersion: could not download the update\n2070854 - cv.status.capabilities.enabledCapabilities doesn?t show the day-2 enabled caps when there are errors on resources update\n2070887 - Cv condition ImplicitlyEnabledCapabilities doesn?t complain about the disabled capabilities which is previously enabled\n2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci\n2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes\n2071019 - rebase vsphere csi driver 2.5\n2071021 - vsphere driver has snapshot support missing\n2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong\n2071139 - Ingress pods scheduled on the same node\n2071364 - All image building tests are broken with \" error: build error: attempting to convert BUILD_LOGLEVEL env var value \"\" to integer: strconv.Atoi: parsing \"\": invalid syntax\n2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)\n2071599 - RoleBidings are not getting updated for 
ClusterRole in OpenShift Web Console\n2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType\n2071617 - remove Kubevirt extensions in favour of dynamic plugin\n2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO\n2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs\n2071700 - v1 events show \"Generated from\" message without the source/reporting component\n2071715 - Shows 404 on Environment nav in Developer console\n2071719 - OCP Console global PatternFly overrides link button whitespace\n2071747 - Link to documentation from the overview page goes to a missing link\n2071761 - Translation Keys Are Not Namespaced\n2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable\n2071859 - ovn-kube pods spec.dnsPolicy should be Default\n2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name \"\"\n2071998 - Cluster-version operator should share details of signature verification when it fails in \u0027Force: true\u0027 updates\n2072106 - cluster-ingress-operator tests do not build on go 1.18\n2072134 - Routes are not accessible within cluster from hostnet pods\n2072139 - vsphere driver has permissions to create/update PV objects\n2072154 - Secondary Scheduler operator panics\n2072171 - Test \"[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]\" fails\n2072195 - machine api doesn\u0027t issue client cert when AWS DNS suffix missing\n2072215 - Whereabouts ip-reconciler should be opt-in and not required\n2072389 - CVO exits upgrade immediately rather than waiting for etcd backup\n2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes\n2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml\n2072570 
- The namespace titles for operator-install-single-namespace test keep changing\n2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)\n2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master\n2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node\n2072793 - Drop \"Used Filesystem\" from \"Virtualization -\u003e Overview\"\n2072805 - Observe \u003e Dashboards: $__range variables cause PromQL query errors\n2072807 - Observe \u003e Dashboards: Missing `panel.styles` attribute for table panels causes JS error\n2072842 - (release-4.11) Gather namespace names with overlapping UID ranges\n2072883 - sometimes monitoring dashboards charts can not be loaded successfully\n2072891 - Update gcp-pd-csi-driver to 1.5.1;\n2072911 - panic observed in kubedescheduler operator\n2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial\n2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system\n2072998 - update aws-efs-csi-driver to the latest version\n2072999 - Navigate from logs of selected Tekton task instead of last one\n2073021 - [vsphere] Failed to update OS on master nodes\n2073112 - Prometheus (uwm) externalLabels not showing always in alerts. \n2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to \"${HOME}/.docker/config.json\" is deprecated. \n2073176 - removing data in form does not remove data from yaml editor\n2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists\n2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it\u0027s \"PipelineRuns\" and on Repository Details page it\u0027s \"Pipeline Runs\". 
\n2073373 - Update azure-disk-csi-driver to 1.16.0\n2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig\n2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning\n2073436 - Update azure-file-csi-driver to v1.14.0\n2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls\n2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)\n2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. \n2073522 - Update ibm-vpc-block-csi-driver to v4.2.0\n2073525 - Update vpc-node-label-updater to v4.1.2\n2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled\n2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW\n2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses\n2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies\n2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring\n2074009 - [OVN] ovn-northd doesn\u0027t clean Chassis_Private record after scale down to 0 a machineSet\n2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary\n2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn\u0027t work well\n2074084 - CMO metrics not visible in the OCP webconsole UI\n2074100 - CRD filtering according to name broken\n2074210 - asia-south2, australia-southeast2, and southamerica-west1Missing from GCP regions\n2074237 - oc new-app --image-stream flag behavior is unclear\n2074243 - DefaultPlacement API allow empty enum value and remove 
default\n2074447 - cluster-dashboard: CPU Utilisation iowait and steal\n2074465 - PipelineRun fails in import from Git flow if \"main\" branch is default\n2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled\n2074475 - [e2e][automation] kubevirt plugin cypress tests fail\n2074483 - coreos-installer doesnt work on Dell machines\n2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes\n2074585 - MCG standalone deployment page goes blank when the KMS option is enabled\n2074606 - occm does not have permissions to annotate SVC objects\n2074612 - Operator fails to install due to service name lookup failure\n2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system\n2074635 - Unable to start Web Terminal after deleting existing instance\n2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records\n2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver\n2074710 - Transition to go-ovirt-client\n2074756 - Namespace column provide wrong data in ClusterRole Details -\u003e Rolebindings tab\n2074767 - Metrics page show incorrect values due to metrics level config\n2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in\n2074902 - `oc debug node/nodename ? 
chroot /host somecommand` should exit with non-zero when the sub-command failed\n2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)\n2075024 - Metal upgrades permafailing on metal3 containers crash looping\n2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP\n2075091 - Symptom Detection.Undiagnosed panic detected in pod\n2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)\n2075149 - Trigger Translations When Extensions Are Updated\n2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors\n2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured\n2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn\u0027t work\n2075478 - Bump documentationBaseURL to 4.11\n2075491 - nmstate operator cannot be upgraded on SNO\n2075575 - Local Dev Env - Prometheus 404 Call errors spam the console\n2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled\n2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow\n2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade\n2075647 - \u0027oc adm upgrade ...\u0027 POSTs ClusterVersion, clobbering any unrecognized spec properties\n2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects\n2075778 - Fix failing TestGetRegistrySamples test\n2075873 - Bump recommended FCOS to 35.20220327.3.0\n2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn\u0027t take effect\n2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs\n2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object\n2076290 - PTP operator readme 
missing documentation on BC setup via PTP config\n2076297 - Router process ignores shutdown signal while starting up\n2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable\n2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap\n2076393 - [VSphere] survey fails to list datacenters\n2076521 - Nodes in the same zone are not updated in the right order\n2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types \u0027too fast\u0027\n2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10\n2076553 - Project access view replace group ref with user ref when updating their Role\n2076614 - Missing Events component from the SDK API\n2076637 - Configure metrics for vsphere driver to be reported\n2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters\n2076793 - CVO exits upgrade immediately rather than waiting for etcd backup\n2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours\n2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26\n2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it\n2076975 - Metric unset during static route conversion in configure-ovs.sh\n2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI\n2077050 - OCP should default to pd-ssd disk type on GCP\n2077150 - Breadcrumbs on a few screens don\u0027t have correct top margin spacing\n2077160 - Update owners for openshift/cluster-etcd-operator\n2077357 - [release-4.11] 200ms packet delay with OVN controller turn on\n2077373 - Accessibility warning on developer perspective\n2077386 - Import page shows untranslated values for the route advanced routing\u003esecurity options (devconsole~Edge)\n2077457 - 
failure in test case \"[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager\"\n2077497 - Rebase etcd to 3.5.3 or later\n2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API\n2077599 - OCP should alert users if they are on vsphere version \u003c7.0.2\n2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster\n2077797 - LSO pods don\u0027t have any resource requests\n2077851 - \"make vendor\" target is not working\n2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn\u0027t replaced, but a random port gets replaced and 8080 still stays\n2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region\n2078013 - drop multipathd.socket workaround\n2078375 - When using the wizard with template using data source the resulting vm use pvc source\n2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label\n2078431 - [OCPonRHV] - ERROR failed to instantiate provider \"openshift/local/ovirt\" to obtain schema: ERROR fork/exec\n2078526 - Multicast breaks after master node reboot/sync\n2078573 - SDN CNI -Fail to create nncp when vxlan is up\n2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. \n2078698 - search box may not completely remove content\n2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)\n2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused ?apiserver panic\u0027d...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int? when AllRequestBodies audit-profile is used. 
\n2078781 - PreflightValidation does not handle multiarch images\n2078866 - [BM][IPI] Installation with bonds fail - DaemonSet \"openshift-ovn-kubernetes/ovnkube-node\" rollout is not making progress\n2078875 - OpenShift Installer fail to remove Neutron ports\n2078895 - [OCPonRHV]-\"cow\" unsupported value in format field in install-config.yaml\n2078910 - CNO spitting out \".spec.groups[0].rules[4].runbook_url: field not declared in schema\"\n2078945 - Ensure only one apiserver-watcher process is active on a node. \n2078954 - network-metrics-daemon makes costly global pod list calls scaling per node\n2078969 - Avoid update races between old and new NTO operands during cluster upgrades\n2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned\n2079062 - Test for console demo plugin toast notification needs to be increased for ci testing\n2079197 - [RFE] alert when more than one default storage class is detected\n2079216 - Partial cluster update reference doc link returns 404\n2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity\n2079315 - (release-4.11) Gather ODF config data with Insights\n2079422 - Deprecated 1.25 API call\n2079439 - OVN Pods Assigned Same IP Simultaneously\n2079468 - Enhance the waitForIngressControllerCondition for better CI results\n2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster\n2079610 - Opeatorhub status shows errors\n2079663 - change default image features in RBD storageclass\n2079673 - Add flags to disable migrated code\n2079685 - Storageclass creation page with \"Enable encryption\" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config\n2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster\n2079788 - Operator restarts while applying the acm-ice example\n2079789 - cluster drops 
ImplicitlyEnabledCapabilities during upgrade\n2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade\n2079805 - Secondary scheduler operator should comply to restricted pod security level\n2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding\n2079837 - [RFE] Hub/Spoke example with daemonset\n2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation\n2079845 - The Event Sinks catalog page now has a blank space on the left\n2079869 - Builds for multiple kernel versions should be ran in parallel when possible\n2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices\n2079961 - The search results accordion has no spacing between it and the side navigation bar. \n2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn\u0027t match pod\u0027s OS [Suite:openshift/conformance/parallel] [Suite:k8s]\n2080054 - TAGS arg for installer-artifacts images is not propagated to build images\n2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status\n2080197 - etcd leader changes produce test churn during early stage of test\n2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build\n2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080379 - Group all e2e tests as parallel or serial\n2080387 - Visual connector not appear between the node if a node get created using \"move connector\" to a different application\n2080416 - oc bash-completion problem\n2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load\n2080446 - Sync ironic images with latest bug fixes packages\n2080679 - [rebase 
v1.24] [sig-cli] test failure\n2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]\n2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing\n2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously\n2080964 - Cluster operator special-resource-operator is always in Failing state with reason: \"Reconciling simple-kmod\"\n2080976 - Avoid hooks config maps when hooks are empty\n2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]\n2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available\n2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources\n2081062 - Unrevert RHCOS back to 8.6\n2081067 - admin dev-console /settings/cluster should point out history may be excerpted\n2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network\n2081081 - PreflightValidation \"odd number of arguments passed as key-value pairs for logging\" error\n2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed\n2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount\n2081119 - `oc explain` output of default overlaySize is outdated\n2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects\n2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames\n2081447 - Ingress operator performs spurious updates in response to API\u0027s defaulting of router deployment\u0027s router container\u0027s ports\u0027 protocol field\n2081562 - lifecycle.posStart hook does not have 
network connectivity. \n2081685 - Typo in NNCE Conditions\n2081743 - [e2e] tests failing\n2081788 - MetalLB: the crds are not validated until metallb is deployed\n2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM\n2081895 - Use the managed resource (and not the manifest) for resource health checks\n2081997 - disconnected insights operator remains degraded after editing pull secret\n2082075 - Removing huge amount of ports takes a lot of time. \n2082235 - CNO exposes a generic apiserver that apparently does nothing\n2082283 - Transition to new oVirt Terraform provider\n2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni\n2082380 - [4.10.z] customize wizard is crashed\n2082403 - [LSO] No new build local-storage-operator-metadata-container created\n2082428 - oc patch healthCheckInterval with invalid \"5 s\" to the ingress-controller successfully\n2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS\n2082492 - [IPI IBM]Can\u0027t create image-registry-private-configuration secret with error \"specified resource key credentials does not contain HMAC keys\"\n2082535 - [OCPonRHV]-workers are cloned when \"clone: false\" is specified in install-config.yaml\n2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform\n2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return\n2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging\n2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset\n2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument\n2082763 - Cluster install stuck on the applying for operatorhub \"cluster\"\n2083149 - \"Update blocked\" label incorrectly displays on new minor versions in the \"Other available paths\" modal\n2083153 - Unable to use 
application credentials for Manila PVC creation on OpenStack\n2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters\n2083219 - DPU network operator doesn\u0027t deal with c1... inteface names\n2083237 - [vsphere-ipi] Machineset scale up process delay\n2083299 - SRO does not fetch mirrored DTK images in disconnected clusters\n2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified\n2083451 - Update external serivces URLs to console.redhat.com\n2083459 - Make numvfs \u003e totalvfs error message more verbose\n2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error\n2083514 - Operator ignores managementState Removed\n2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service\n2083756 - Linkify not upgradeable message on ClusterSettings page\n2083770 - Release image signature manifest filename extension is yaml\n2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities\n2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors\n2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form\n2083999 - \"--prune-over-size-limit\" is not working as expected\n2084079 - prometheus route is not updated to \"path: /api\" after upgrade from 4.10 to 4.11\n2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface\n2084124 - The Update cluster modal includes a broken link\n2084215 - Resource configmap \"openshift-machine-api/kube-rbac-proxy\" is defined by 2 manifests\n2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run\n2084280 - GCP API Checks Fail if non-required APIs are not enabled\n2084288 - \"alert/Watchdog must have no gaps or changes\" failing after bump\n2084292 - Access to dashboard resources is needed in dynamic plugin SDK\n2084331 - 
Resource with multiple capabilities included unless all capabilities are disabled\n2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. \n2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data\n2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster\n2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri\n2084463 - 5 control plane replica tests fail on ephemeral volumes\n2084539 - update azure arm templates to support customer provided vnet\n2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail\n2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (\".\") character\n2084615 - Add to navigation option on search page is not properly aligned\n2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass\n2084732 - A special resource that was created in OCP 4.9 can\u0027t be deleted after an upgrade to 4.10\n2085187 - installer-artifacts fails to build with go 1.18\n2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse\n2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated\n2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster\n2085407 - There is no Edit link/icon for labels on Node details page\n2085721 - customization controller image name is wrong\n2086056 - Missing doc for OVS HW offload\n2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11\n2086092 - update kube to v.24\n2086143 - CNO uses too much memory\n2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks\n2086301 - kubernetes nmstate pods are not running after creating instance\n2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment\n2086417 
- Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by `oc image mirror` command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - `--keep-manifest-list=true` does not work for `oc adm release new`, only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity "restricted:v1.24"`
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting
2089254 - CAPI operator: Rotate token secret if its older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on mutli arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures.
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs.
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects are not existing
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates `checkAccess` and `useAccessReview` doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fails liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on `cluster destroy`
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shoudn't be there
2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding
2093600 - Project access tab should apply new permissions before it delete old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function need update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install totals metric
2093957 - Oc-mirror write dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips
2094039 - egressIP panics with nil pointer dereference
2094055 - Bump coreos-installer for s390x Secure Execution
2094071 - No runbook created for SouthboundStale alert
2094088 - Columns in NBDB may never be updated by OVNK
2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator
2094152 - Alerts in the virtualization overview status card aren't filtered
2094196 - Add default and validating webhooks for Power VS MAPI
2094227 - Topology: Create Service Binding should not be the last option (even under delete)
2094239 - custom pool Nodes with 0 nodes are always populated in progress bar
2094303 - If og is configured with sa, operator installation will be failed.
2094335 - [Nutanix] - debug logs are enabled by default in machine-controller
2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform
2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration
2094525 - Allow automatic upgrades for efs operator
2094532 - ovn-windows CI jobs are broken
2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run
2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character
2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s
2094801 - Kuryr controller keep restarting when handling IPs with leading zeros
2094806 - Machine API oVrit component should use K8s 1.24 dependencies
2094816 - Kuryr controller restarts when over quota
2094833 - Repository overview page does not show default PipelineRun template for developer user
2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state
2094864 - Rebase CAPG to latest changes
2094866 - oc-mirror does not always delete all manifests associated with an image during pruning
2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing
2094902 - Fix installer cross-compiling
2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters
2095049 - managed-csi StorageClass does not create PVs
2095071 - Backend tests fails after devfile registry update
2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh
2095110 - [ovn] northd container termination script must use bash
2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp
2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance
2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic
2095231 - Kafka Sink sidebar in topology is empty
2095247 - Event sink form doesn't show channel as sink until app is refreshed
2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node
2095256 - Samples Owner needs to be Updated
2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection'
2095362 - oVirt CSI driver operator should use latest go-ovirt-client
2095574 - e2e-agnostic CI job fails
2095687 - Debug Container shown for build logs and on click ui breaks
2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster
2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns
2095756 - CNO panics with concurrent map read/write
2095772 - Memory requests for ovnkube-master containers are over-sized
2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB
2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized
2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode
2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6
2096315 - NodeClockNotSynchronising alert's severity should be critical
2096350 - Web console doesn't display webhook errors for upgrades
2096352 - Collect whole journal in gather
2096380 - acm-simple-kmod references deprecated KVC example
2096392 - Topology node icons are not properly visible in Dark mode
2096394 - Add page Card items background color does not match with column background color in Dark mode
2096413 - br-ex not created due to default bond interface having a different mac address than expected
2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile
2096605 - [vsphere] no validation checking for diskType
2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups
2096855 - `oc adm release new` failed with error when use an existing multi-arch release image as input
2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider
2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import
2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology
2097043 - No clean way to specify operand issues to KEDA OLM operator
2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries
2097067 - ClusterVersion history pruner does not always retain initial completed update entry
2097153 - poor performance on API call to vCenter ListTags with thousands of tags
2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects
2097239 - Change Lower CPU limits for Power VS cloud
2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support
2097260 - openshift-install create manifests failed for Power VS platform
2097276 - MetalLB CI deploys the operator via manifests and not using the csv
2097282 - chore: update external-provisioner to the latest upstream release
2097283 - chore: update external-snapshotter to the latest upstream release
2097284 - chore: update external-attacher to the latest upstream release
2097286 - chore: update node-driver-registrar to the latest upstream release
2097334 - oc plugin help shows 'kubectl'
2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11
2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook
2097454 - Placeholder bug for OCP 4.11.0 metadata release
2097503 - chore: rebase against latest external-resizer
2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading
2097607 - Add Power VS support to Webhooks tests in actuator e2e test
2097685 - Ironic-agent can't restart because of existing container
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1
2097810 - Required Network tools missing for Testing e2e PTP
2097832 - clean up unused IPv6DualStackNoUpgrade feature gate
2097940 - openshift-install destroy cluster traps if vpcRegion not specified
2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs failing
2098172 - oc-mirror does not validate the registry in the storage config
2098175 - invalid license in python-dataclasses-0.8-2.el8 spec
2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file
2098242 - typo in SRO specialresourcemodule
2098243 - Add error check to Platform create for Power VS
2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device
2098508 - Control-plane-machine-set-operator report panic
2098610 - No need to check the push permission with --manifests-only option
2099293 - oVirt cluster API provider should use latest go-ovirt-client
2099330 - Edit application grouping is shown to user with view only access in a cluster
2099340 - CAPI e2e tests for AWS are missing
2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump
2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups
2099528 - Layout issue: No spacing in delete modals
2099561 - Prometheus returns HTTP 500 error on /favicon.ico
2099582 - Format and update Repository overview content
2099611 - Failures on etcd-operator watch channels
2099637 - Should print error when use --keep-manifest-list=false for manifestlist image
2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)
2099668 - KubeControllerManager should degrade when GC stops working
2099695 - Update CAPG after rebase
2099751 - specialresourcemodule stacktrace while looping over build status
2099755 - EgressIP node's mgmtIP reachability configuration option
2099763 - Update icons for event sources and sinks in topology, Add page, and context menu
2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]
2099821 - exporting a pointer for the loop variable
2099875 - The speaker won't start if there's another component on the host listening on 8080
2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing
2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file
2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster
2100001 - Sync upstream v1.22.0 downstream
2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator
2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment
2100038 - failure to update special-resource-lifecycle table during update Event
2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump
2100138 - release info --bugs has no differentiator between Jira and Bugzilla
2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation
2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar
2100323 - Sqlit-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied"
2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile
2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8
2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running
2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field
2100507 - Remove redundant log lines from obj_retry.go
2100536 - Update API to allow EgressIP node reachability check
2100601 - Update CNO to allow EgressIP node reachability check
2100643 - [Migration] [GCP]OVN can not rollback to SDN
2100644 - openshift-ansible FTBFS on RHEL8
2100669 - Telemetry should not log the full path if it contains a username
2100749 - [OCP 4.11] multipath support needs multipath modules
2100825 - Update machine-api-powervs go modules to latest version
2100841 - tiny openshift-install usability fix for setting KUBECONFIG
2101460 - An etcd member for a new machine was never added to the cluster
2101498 - Revert Bug 2082599: add upper bound to number of failed attempts
2102086 - The base image is still 4.10 for operator-sdk 1.22
2102302 - Dummy bug for 4.10 backports
2102362 - Valid regions should be allowed in GCP install config
2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster
2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption
2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install
2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root
2102947 - [VPA] recommender is logging errors for pods with init containers
2103053 - [4.11] Backport Prow CI improvements from master
2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly
2103080 - br-ex not created due to default bond interface having a different mac address than expected
2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces
2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path'
2103749 - MachineConfigPool is not getting updated
2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec
2104432 - [dpu-network-operator] Updating images to be consistent with ART
2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack
2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0
2104589 - must-gather namespace should have 'privileged' warn and audit pod security labels besides enforce
2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes
2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference"
2104727 - Bootstrap node should honor http proxy
2104906 - Uninstall fails with Observed a panic: runtime.boundsError
2104951 - Web console doesn't display webhook errors for upgrades
2104991 - Completed pods may not be correctly cleaned up
2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds
2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied
2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history
2105167 - BuildConfig throws error when using a label with a / in it
2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial
2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator
2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs.
2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18
2106051 - Unable to deploy acm-ice using latest SRO 4.11 build
2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]
2106062 - [4.11] Bootimage bump tracker
2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc"
2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls
2106313 - bond-cni: backport bond-cni GA items to 4.11
2106543 - Typo in must-gather release-4.10
2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI
2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed. rpm-ostree status shows No space left on device
2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted
2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing
2107501 - metallb greenwave tests failure
2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found"
2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade
2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference
2108686 - rpm-ostreed: start limit hit easily
2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate
2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations
2111055 - dummy bug for 4.10.z bz2110938

5. References:

https://access.redhat.com/security/cve/CVE-2018-25009
https://access.redhat.com/security/cve/CVE-2018-25010
https://access.redhat.com/security/cve/CVE-2018-25012
https://access.redhat.com/security/cve/CVE-2018-25013
https://access.redhat.com/security/cve/CVE-2018-25014
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-17541
https://access.redhat.com/security/cve/CVE-2020-19131
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2020-35492
https://access.redhat.com/security/cve/CVE-2020-36330
https://access.redhat.com/security/cve/CVE-2020-36331
https://access.redhat.com/security/cve/CVE-2020-36332
https://access.redhat.com/security/cve/CVE-2021-3481
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-3672
https://access.redhat.com/security/cve/CVE-2021-3695
https://access.redhat.com/security/cve/CVE-2021-3696
https://access.redhat.com/security/cve/CVE-2021-3697
https://access.redhat.com/security/cve/CVE-2021-3737
https://access.redhat.com/security/cve/CVE-2021-4115
https://access.redhat.com/security/cve/CVE-2021-4156
https://access.redhat.com/security/cve/CVE-2021-4189
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-23648
https://access.redhat.com/security/cve/CVE-2021-25219
https://access.redhat.com/security/cve/CVE-2021-31535
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-38185
https://access.redhat.com/security/cve/CVE-2021-38593
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-41617
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/cve/CVE-2021-43527
https://access.redhat.com/security/
cve/CVE-2021-43818\nhttps://access.redhat.com/security/cve/CVE-2021-44225\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0778\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1215\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1621\nhttps://access.redhat.com/security/cve/CVE-2022-1629\nhttps://access.redhat.com/security/cve/CVE-2022-1706\nhttps://access.redhat.com/security/cve/CVE-2022-1729\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24903\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-26691\nhttps://access.redhat.com/security/cve/CVE-2022-26945\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-28733\nhttps://access.redhat.com/security/cve/CVE-2022-28734\nhttps://access.redhat.com/security/cve/CVE-2022-28735\nhttps://acces
s.redhat.com/security/cve/CVE-2022-28736\nhttps://access.redhat.com/security/cve/CVE-2022-28737\nhttps://access.redhat.com/security/cve/CVE-2022-29162\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30321\nhttps://access.redhat.com/security/cve/CVE-2022-30322\nhttps://access.redhat.com/security/cve/CVE-2022-30323\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. Bugs fixed (https://bugzilla.redhat.com/):\n\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key\n2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key\n2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key\n2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. 
\n\nThis advisory contains the following OpenShift Virtualization 4.11.0\nimages:\n\nRHEL-8-CNV-4.11\n==============\nhostpath-provisioner-container-v4.11.0-21\nkubevirt-tekton-tasks-operator-container-v4.11.0-29\nkubevirt-template-validator-container-v4.11.0-17\nbridge-marker-container-v4.11.0-26\nhostpath-csi-driver-container-v4.11.0-21\ncluster-network-addons-operator-container-v4.11.0-26\novs-cni-marker-container-v4.11.0-26\nvirtio-win-container-v4.11.0-16\novs-cni-plugin-container-v4.11.0-26\nkubemacpool-container-v4.11.0-26\nhostpath-provisioner-operator-container-v4.11.0-24\ncnv-containernetworking-plugins-container-v4.11.0-26\nkubevirt-ssp-operator-container-v4.11.0-54\nvirt-cdi-uploadserver-container-v4.11.0-59\nvirt-cdi-cloner-container-v4.11.0-59\nvirt-cdi-operator-container-v4.11.0-59\nvirt-cdi-importer-container-v4.11.0-59\nvirt-cdi-uploadproxy-container-v4.11.0-59\nvirt-cdi-controller-container-v4.11.0-59\nvirt-cdi-apiserver-container-v4.11.0-59\nkubevirt-tekton-tasks-modify-vm-template-container-v4.11.0-7\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.11.0-7\nkubevirt-tekton-tasks-copy-template-container-v4.11.0-7\ncheckup-framework-container-v4.11.0-67\nkubevirt-tekton-tasks-cleanup-vm-container-v4.11.0-7\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.0-7\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.0-7\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.11.0-7\nvm-network-latency-checkup-container-v4.11.0-67\nkubevirt-tekton-tasks-create-datavolume-container-v4.11.0-7\nhyperconverged-cluster-webhook-container-v4.11.0-95\ncnv-must-gather-container-v4.11.0-62\nhyperconverged-cluster-operator-container-v4.11.0-95\nkubevirt-console-plugin-container-v4.11.0-83\nvirt-controller-container-v4.11.0-105\nvirt-handler-container-v4.11.0-105\nvirt-operator-container-v4.11.0-105\nvirt-launcher-container-v4.11.0-105\nvirt-artifacts-server-container-v4.11.0-105\nvirt-api-container-v4.11.0-105\nlibguestfs-tools-container-v4.11.
0-105\nhco-bundle-registry-container-v4.11.0-587\n\nSecurity Fix(es):\n\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n\n* kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n(CVE-2022-1798)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1937609 - VM cannot be restarted\n1945593 - Live migration should be blocked for VMs with host devices\n1968514 - [RFE] Add cancel migration action to virtctl\n1993109 - CNV MacOS Client not signed\n1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side\n2001385 - no \"name\" label in virt-operator pod\n2009793 - KBase to clarify nested support status is missing\n2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate\n2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)\n2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implemenation\n2026357 - Migration in sequence can be reported as failed even when it succeeded\n2029349 - cluster-network-addons-operator does not serve metrics through HTTPS\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2031857 - Add annotation for URL to download the image\n2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate\n2035344 - kubemacpool-mac-controller-manager not ready\n2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered\n2039976 - Pod stuck in \"Terminating\" state when removing VM with kernel boot and container disks\n2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI\n2041467 - [SSP] Support custom DataImportCron creating in custom namespaces\n2042402 - LiveMigration with postcopy misbehave when failure occurs\n2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists\n2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047186 - When entering to a RH supported 
template, it changes the project (namespace) to \"OpenShift\"\n2051899 - 4.11.0 containers\n2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn\u0027t configure ip nat rules\n2052466 - Event does not include reason for inability to live migrate\n2052689 - Overhead Memory consumption calculations are incorrect\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056467 - virt-template-validator pods getting scheduled on the same node\n2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long\n2057310 - qemu-guest-agent does not report information due to selinux denials\n2058149 - cluster-network-addons-operator deployment\u0027s MULTUS_IMAGE is pointing to brew image\n2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs\n2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state\n2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool\n2060585 - [SNO] Failed to find the virt-controller leader pod\n2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled. 
\n2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource\n2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace\n2063792 - No DataImportCron for CentOS 7\n2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2064936 - Migration of vm from VMware reports pvc not large enough\n2065014 - Feature Highlights in CNV 4.10 contains links to 4.7\n2065019 - \"Running VMs per template\" in the new overview tab counts VMs that are not running\n2066768 - [CNV-4.11-HCO] User Cannot List Resource \"namespaces\" in API group\n2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom\n2069287 - Two annotations for VM Template provider name\n2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2070864 - non-privileged user cannot see catalog tiles\n2071488 - \"Migrate Node to Node\" is confusing. \n2071549 - [rhel-9] unable to create a non-root virt-launcher based VM\n2071611 - Metrics documentation generators are missing metrics/recording rules\n2071921 - Kubevirt RPM is not being built\n2073669 - [rhel-9] VM fails to start\n2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream\n2073982 - [CNV-4.11-RHEL9] \u0027virtctl\u0027 binary fails with \u0027rc1\u0027 with \u0027virtctl version\u0027 command\n2074337 - VM created from registry cannot be started\n2075200 - VLAN filtering cannot be configured with Intel X710\n2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff\n2076292 - Upgrade from 4.10.1-\u003e4.11 using nightly channel, is not completing with error \"could not complete the upgrade process. 
KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR\"\n2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file\n2076790 - Alert SSPDown is constantly in Firing state\n2076908 - clicking on a template in the Running VMs per Template card leads to 404\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2078700 - Windows template boot source should be blank\n2078703 - [RFE] Please hide the user defined password when customizing cloud-init\n2078709 - VM conditions column have wrong key/values\n2078728 - Common template rootDisk is not named correctly\n2079366 - rootdisk is not able to edit\n2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM\n2079783 - Actions are broken in topology view\n2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck\n2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod\n2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop\n2080833 - Missing cloud init script editor in the scripts tab\n2080835 - SSH key is set using cloud init script instead of new api\n2081182 - VM SSH command generated by UI points at api VIP\n2081202 - cloud-init for Windows VM generated with corrupted \"undefined\" section\n2081409 - when viewing a common template details page, user need to see the message \"can\u0027t edit common template\" on all tabs\n2081671 - SSH service created outside the UI is not discoverable\n2081831 - [RFE] Improve disk hotplug UX\n2082008 - LiveMigration fails due to loss of connection to destination host\n2082164 - Migration progress timeout expects absolute progress\n2082912 - 
[CNV-4.11] HCO Being Unable to Reconcile State\n2083093 - VM overview tab is crashed\n2083097 - \"Mount Windows drivers disk\" should not show when the template is not \"windows\"\n2083100 - Something keeps loading in the \"node selector\" modal\n2083101 - \"Restore default settings\" never become available while editing CPU/Memory\n2083135 - VM fails to schedule with vTPM in spec\n2083256 - SSP Reconcile logging improvement when CR resources are changed\n2083595 - [RFE] Disable VM descheduler if the VM is not live migratable\n2084102 - [e2e] Many elements are lacking proper selector like \u0027data-test-id\u0027 or \u0027data-test\u0027\n2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails\n2084418 - \"Invalid SSH public key format\" appears when drag ssh key file to \"Authorized SSH Key\" field\n2084431 - User credentials for ssh is not in correct format\n2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab. \n2091406 - wrong template namespace label when creating a vm with wizard\n2091754 - Scheduling and scripts tab should be editable while the VM is running\n2091755 - Change bottom \"Save\" to \"Apply\" on cloud-init script form\n2091756 - The root disk of cloned template should be editable\n2091758 - \"OS\" should be \"Operating system\" in template filter\n2091760 - The provider should be empty if it\u0027s not set during cloning\n2091761 - Miss \"Edit labels\" and \"Edit annotations\" in template kebab button\n2091762 - Move notification above the tabs in template details page\n2091764 - Clone a template should lead to the template details\n2091765 - \"Edit bootsource\" is keeping in load in template actions dropdown\n2091766 - \"Are you sure you want to leave this page?\" pops up when click the \"Templates\" link\n2091853 - On Snapshot tab of single VM \"Restore\" button should move to the kebab actions together with the Delete\n2091863 - BootSource edit modal should list affected templates\n2091868 - 
Catalog list view has two columns named \"BootSource\"\n2091889 - Devices should be editable for customize template\n2091897 - username is missing in the generated ssh command\n2091904 - VM is not started if adding \"Authorized SSH Key\" during vm creation\n2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root\n2091940 - SSH is not enabled in vm details after restart the VM\n2091945 - delete a template should lead to templates list\n2091946 - Add disk modal shows wrong units\n2091982 - Got a lot of \"Reconciler error\" in cdi-deployment log after adding custom DataImportCron to hco\n2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank\n2092052 - Virtualization should be omitted in Calatog breadcrumbs\n2092071 - Getting started card in Virtualization overview can not be hidden. \n2092079 - Error message stays even when problematic field is dismissed\n2092158 - PrometheusRule kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO\n2092228 - Ensure Machine Type for new VMs is 8.6\n2092230 - [RFE] Add indication/mark to deprecated template\n2092306 - VM is stucking with WaitingForVolumeBinding if creating via \"Boot from CD\"\n2092337 - os is empty in VM details page\n2092359 - [e2e] data-test-id includes all pvc name\n2092654 - [RFE] No obvious way to delete the ssh key from the VM\n2092662 - No url example for rhel and windows template\n2092663 - no hyperlink for URL example in disk source \"url\"\n2092664 - no hyperlink to the cdi uploadproxy URL\n2092781 - Details card should be removed for non admins. \n2092783 - Top consumers\u0027 card should be removed for non admins. \n2092787 - Operators links should be removed from Getting started card\n2092789 - \"Learn more about Operators\" link should lead to the Red Hat documentation\n2092951 - \"Edit BootSource\" 
action should have more explicit information when disabled\n2093282 - Remove links to \u0027all-namespaces/\u0027 for non-privileged user\n2093691 - Creation flow drawer left padding is broken\n2093713 - Required fields in creation flow should be highlighted if empty\n2093715 - Optional parameters section in creation flow is missing bottom padding\n2093716 - CPU|Memory modal button should say \"Restore template settings\"\n2093772 - Add a service in environment it reminds a pending change in boot order\n2093773 - Console crashed if adding a service without serial number\n2093866 - Cannot create vm from the template `vm-template-example`\n2093867 - OS for template \u0027vm-template-example\u0027 should matching the version of the image\n2094202 - Cloud-init username field should have hint\n2094207 - Cloud-init password field should have auto-generate option\n2094208 - SSH key input is missing validation\n2094217 - YAML view should reflect shanges in SSH form\n2094222 - \"?\" icon should be placed after red asterisk in required fields\n2094323 - Workload profile should be editable in template details page\n2094405 - adding resource on enviornment isnt showing on disks list when vm is running\n2094440 - Utilization pie charts figures are not based on current data\n2094451 - PVC selection in VM creation flow does not work for non-priv user\n2094453 - CD Source selection in VM creation flow is missing Upload option\n2094465 - Typo in Source tooltip\n2094471 - Node selector modal for non-privileged user\n2094481 - Tolerations modal for non-privileged user\n2094486 - Add affinity rule modal\n2094491 - Affinity rules modal button\n2094495 - Descheduler modal has same text in two lines\n2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id\n2094665 - Dedicated Resources modal for non-privileged user\n2094678 - Secrets and ConfigMaps can\u0027t be added to Windows VM\n2094727 - Creation flow should have VM info in header row\n2094807 - hardware devices 
dropdown has group title even with no devices in cluster\n2094813 - Cloudinit password is seen in wizard\n2094848 - Details card on Overview page - \u0027View details\u0027 link is missing\n2095125 - OS is empty in the clone modal\n2095129 - \"undefined\" appears in rootdisk line in clone modal\n2095224 - affinity modal for non-privileged users\n2095529 - VM migration cancelation in kebab action should have shorter name\n2095530 - Column sizes in VM list view\n2095532 - Node column in VM list view is visible to non-privileged user\n2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime\n2095570 - Details tab of VM should not have Node info for non-privileged user\n2095573 - Disks created as environment or scripts should have proper label\n2095953 - VNC console controls layout\n2095955 - VNC console tabs\n2096166 - Template \"vm-template-example\" is binding with namespace \"default\"\n2096206 - Inconsistent capitalization in Template Actions\n2096208 - Templates in the catalog list is not sorted\n2096263 - Incorrectly displaying units for Disks size or Memory field in various places\n2096333 - virtualization overview, related operators title is not aligned\n2096492 - Cannot create vm from a cloned template if its boot source is edited\n2096502 - \"Restore template settings\" should be removed from template CPU editor\n2096510 - VM can be created without any disk\n2096511 - Template shows \"no Boot Source\" and label \"Source available\" at the same time\n2096620 - in templates list, edit boot reference kebab action opens a modal with different title\n2096781 - Remove boot source provider while edit boot source reference\n2096801 - vnc thumbnail in virtual machine overview should be active on page load\n2096845 - Windows template\u0027s scripts tab is crashed\n2097328 - virtctl guestfs shouldn\u0027t required uid = 0\n2097370 - missing titles for optional parameters in wizard customization page\n2097465 - 
Count is not updating for \u0027prometheusrule\u0027 component when metrics kubevirt_hco_out_of_band_modifications_count executed\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2098134 - \"Workload profile\" column is not showing completely in template list\n2098135 - Workload is not showing correct in catalog after change the template\u0027s workload\n2098282 - Javascript error when changing boot source of custom template to be an uploaded file\n2099443 - No \"Quick create virtualmachine\" button for template \u0027vm-template-example\u0027\n2099533 - ConsoleQuickStart for HCO CR\u0027s VM is missing\n2099535 - The cdi-uploadproxy certificate url should be opened in a new tab\n2099539 - No storage option for upload while editing a disk\n2099566 - Cloudinit should be replaced by cloud-init in all places\n2099608 - \"DynamicB\" shows in vm-example disk size\n2099633 - Doc links needs to be updated\n2099639 - Remove user line from the ssh command section\n2099802 - Details card link shouldn\u0027t be hard-coded\n2100054 - Windows VM with WSL2 guest fails to migrate\n2100284 - Virtualization overview is crashed\n2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101485 - Cloudinit should be replaced by cloud-init in all places\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer\n2102076 - Using 
CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2102122 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2102124 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102127 - Cannot add NIC to VM template as non-priv user\n2102129 - All templates are labeling \"source available\" in template list page\n2102131 - The number of hardware devices is not correct in vm overview tab\n2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2102143 - vm clone modal is displaying DV size instead of PVC size\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102543 - Add button moved to right\n2102544 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102545 - VM filter has two \"Other\" checkboxes which are triggered together\n2104617 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106258 - All pages are crashed after visit Virtualization -\u003e Overview\n2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions\n2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111562 - kubevirt plugin console crashed after visit vmi page\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-14155"
},
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "PACKETSTORM",
"id": "161245"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "PACKETSTORM",
"id": "165758"
},
{
"db": "PACKETSTORM",
"id": "167206"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "168352"
},
{
"db": "PACKETSTORM",
"id": "168392"
}
],
"trust": 1.8
},
"exploit_availability": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-167005",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
}
]
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2020-14155",
"trust": 2.0
},
{
"db": "PACKETSTORM",
"id": "161245",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168352",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165862",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168392",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "164927",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165758",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167206",
"trust": 0.2
},
{
"db": "CNVD",
"id": "CNVD-2020-53121",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165135",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165096",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165296",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166051",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167956",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166308",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165286",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160545",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164928",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166489",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165287",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165631",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164967",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165002",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165288",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165129",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164825",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168036",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165209",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166309",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202006-1036",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-167005",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168042",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "PACKETSTORM",
"id": "161245"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "PACKETSTORM",
"id": "165758"
},
{
"db": "PACKETSTORM",
"id": "167206"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "168352"
},
{
"db": "PACKETSTORM",
"id": "168392"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"id": "VAR-202006-0222",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T20:28:59.964000Z",
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-190",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20221028-0010/"
},
{
"trust": 1.1,
"url": "https://about.gitlab.com/releases/2020/07/01/security-release-13-1-2-release/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht211931"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht212147"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2020/dec/32"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2021/feb/14"
},
{
"trust": 1.1,
"url": "https://bugs.gentoo.org/717920"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.1,
"url": "https://www.pcre.org/original/changelog.txt"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772%40%3cdev.mina.apache.org%3e"
},
{
"trust": 0.9,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4189"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3737"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-25219"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33574"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-29923"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29923"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28327"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20095"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24921"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-38185"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27191"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35492"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1621"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-42771"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21698"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4115"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-28493"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772@%3cdev.mina.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1742"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1757"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1753"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1751"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1744"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht212147."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29633"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1737"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1736"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1738"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1754"
},
{
"trust": 0.1,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27904"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29608"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1745"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27938"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1743"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1758"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27937"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29614"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1750"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1746"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1747"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-1741"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3757"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4848"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3620"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26691"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13950"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17567"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35452"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4614"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30641"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30641"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17567"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13950"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35452"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38297"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/distr_tracing/distr_tracing_install/distr-tracing-updating.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/distr_tracing/distributed-tracing-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0318"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3426"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/pcre3/2:8.39-12ubuntu0.1"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5425-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/pcre3/2:8.39-13ubuntu0.22.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/pcre3/2:8.39-13ubuntu0.21.10.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/pcre3/2:8.39-9ubuntu0.1"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44225"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43818"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26945"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38593"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-19131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23648"
},
{
"trust": 0.1,
"url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4156"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5069"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28733"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29162"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3672"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30321"
},
{
"trust": 0.1,
"url": "https://10.0.0.7:2379"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3697"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1706"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28734"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28737"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30322"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3695"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28735"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1215"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1729"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29810"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30323"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15586"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8559"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2526"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28500"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0686"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23337"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0639"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6429"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0512"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6526"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1798"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "PACKETSTORM",
"id": "161245"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "PACKETSTORM",
"id": "165758"
},
{
"db": "PACKETSTORM",
"id": "167206"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "168352"
},
{
"db": "PACKETSTORM",
"id": "168392"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "PACKETSTORM",
"id": "161245"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "PACKETSTORM",
"id": "165758"
},
{
"db": "PACKETSTORM",
"id": "167206"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "168352"
},
{
"db": "PACKETSTORM",
"id": "168392"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2020-06-15T00:00:00",
"db": "VULHUB",
"id": "VHN-167005"
},
{
"date": "2021-02-02T16:06:51",
"db": "PACKETSTORM",
"id": "161245"
},
{
"date": "2021-11-30T14:44:48",
"db": "PACKETSTORM",
"id": "165099"
},
{
"date": "2021-11-11T14:53:11",
"db": "PACKETSTORM",
"id": "164927"
},
{
"date": "2022-02-04T17:26:39",
"db": "PACKETSTORM",
"id": "165862"
},
{
"date": "2022-01-28T14:33:13",
"db": "PACKETSTORM",
"id": "165758"
},
{
"date": "2022-05-17T17:25:20",
"db": "PACKETSTORM",
"id": "167206"
},
{
"date": "2022-08-10T15:56:22",
"db": "PACKETSTORM",
"id": "168042"
},
{
"date": "2022-09-13T15:42:14",
"db": "PACKETSTORM",
"id": "168352"
},
{
"date": "2022-09-15T14:20:18",
"db": "PACKETSTORM",
"id": "168392"
},
{
"date": "2020-06-15T17:15:10.777000",
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-12-03T00:00:00",
"db": "VULHUB",
"id": "VHN-167005"
},
{
"date": "2024-03-27T16:04:48.863000",
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Apple Security Advisory 2021-02-01-1",
"sources": [
{
"db": "PACKETSTORM",
"id": "161245"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "168352"
}
],
"trust": 0.2
}
}
VAR-202104-1571
Vulnerability from variot - Updated: 2024-07-23 20:25
A race condition in Linux kernel SCTP sockets (net/sctp/socket.c) before 5.12-rc8 can lead to kernel privilege escalation from the context of a network service or an unprivileged process. If sctp_destroy_sock is called without sock_net(sk)->sctp.addr_wq_lock then an element is removed from the auto_asconf_splist list without any proper locking. This can be exploited by an attacker with network service privileges to escalate to root or from the context of an unprivileged user directly if a BPF_CGROUP_INET_SOCK_CREATE is attached which denies creation of some SCTP socket.
==========================================================================
Ubuntu Security Notice USN-4997-2
June 25, 2021
linux-kvm vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.04
Summary:
Several security issues were fixed in the Linux kernel.
Software Description:
- linux-kvm: Linux kernel for cloud environments
Details:
USN-4997-1 fixed vulnerabilities in the Linux kernel for Ubuntu 21.04. This update provides the corresponding updates for the Linux KVM kernel for Ubuntu 21.04. It was discovered that a race condition existed in the CAN BCM networking protocol of the Linux kernel, leading to multiple use-after-free conditions. A local attacker could use this issue to execute arbitrary code. (CVE-2021-3609)
Piotr Krysiuk discovered that the eBPF implementation in the Linux kernel did not properly enforce limits for pointer operations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33200)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation did not properly clear received fragments from memory in some situations. A physically proximate attacker could possibly use this issue to inject packets or expose sensitive information. (CVE-2020-24586)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation incorrectly handled encrypted fragments. A physically proximate attacker could possibly use this issue to decrypt fragments. (CVE-2020-24587)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation incorrectly handled certain malformed frames. If a user were tricked into connecting to a malicious server, a physically proximate attacker could use this issue to inject packets. (CVE-2020-24588)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation incorrectly handled EAPOL frames from unauthenticated senders. A physically proximate attacker could inject malicious packets to cause a denial of service (system crash). (CVE-2020-26139)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation did not properly verify certain fragmented frames. A physically proximate attacker could possibly use this issue to inject or decrypt packets. (CVE-2020-26141)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation accepted plaintext fragments in certain situations. A physically proximate attacker could use this issue to inject packets. (CVE-2020-26145)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation could reassemble mixed encrypted and plaintext fragments. A physically proximate attacker could possibly use this issue to inject packets or exfiltrate selected fragments. (CVE-2020-26147)
It was discovered that a race condition existed in the SCTP implementation in the Linux kernel. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-23133)
Or Cohen and Nadav Markus discovered a use-after-free vulnerability in the nfc implementation in the Linux kernel. A privileged local attacker could use this issue to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-23134)
Manfred Paul discovered that the extended Berkeley Packet Filter (eBPF) implementation in the Linux kernel contained an out-of-bounds vulnerability. A local attacker could use this issue to execute arbitrary code. (CVE-2021-31440)
Piotr Krysiuk discovered that the eBPF implementation in the Linux kernel did not properly prevent speculative loads in certain situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2021-31829)
It was discovered that a race condition existed in the Linux kernel’s Bluetooth HCI subsystem when removing controllers. An attacker could use this issue to possibly execute arbitrary code. (CVE-2021-32399)
It was discovered that a use-after-free existed in the Bluetooth HCI driver of the Linux kernel. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33034)
It was discovered that an out-of-bounds (OOB) memory access flaw existed in the f2fs module of the Linux kernel. A local attacker could use this issue to cause a denial of service (system crash). (CVE-2021-3506)
Mathias Krause discovered that a null pointer dereference existed in the Nitro Enclaves kernel driver of the Linux kernel. A local attacker could use this issue to cause a denial of service or possibly execute arbitrary code. (CVE-2021-3543)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.04: linux-image-5.11.0-1009-kvm 5.11.0-1009.9 linux-image-kvm 5.11.0.1009.9
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
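After rebooting, one way to confirm that the patched kernel is actually the one running is to compare `uname -r` against the updated package version. A minimal sketch, assuming the version strings from the package list above (5.11.0-1009.9 on a hypothetical Ubuntu 21.04 host):

```shell
#!/bin/sh
# Expected kernel ABI taken from the advisory's package list above.
expected="5.11.0-1009"
running="$(uname -r)"

# The running kernel only changes after the reboot, even though the
# new packages are already installed on disk.
case "$running" in
  "$expected"*) echo "patched kernel is running ($running)" ;;
  *)            echo "reboot still pending: running $running" ;;
esac
```

Until the reboot, `uname -r` keeps reporting the old kernel release even though `dpkg` shows the new packages as installed.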
References: https://ubuntu.com/security/notices/USN-4997-2 https://ubuntu.com/security/notices/USN-4997-1 CVE-2020-24586, CVE-2020-24587, CVE-2020-24588, CVE-2020-26139, CVE-2020-26141, CVE-2020-26145, CVE-2020-26147, CVE-2021-23133, CVE-2021-23134, CVE-2021-31440, CVE-2021-31829, CVE-2021-32399, CVE-2021-33034, CVE-2021-33200, CVE-2021-3506, CVE-2021-3543, CVE-2021-3609
Package Information: https://launchpad.net/ubuntu/+source/linux-kvm/5.11.0-1009.9
. 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es):

* kernel: out-of-bounds reads in pinctrl subsystem

Bugs fixed (https://bugzilla.redhat.com/):
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: kernel security, bug fix, and enhancement update Advisory ID: RHSA-2021:4356-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:4356 Issue date: 2021-11-09 CVE Names: CVE-2020-0427 CVE-2020-24502 CVE-2020-24503 CVE-2020-24504 CVE-2020-24586 CVE-2020-24587 CVE-2020-24588 CVE-2020-26139 CVE-2020-26140 CVE-2020-26141 CVE-2020-26143 CVE-2020-26144 CVE-2020-26145 CVE-2020-26146 CVE-2020-26147 CVE-2020-27777 CVE-2020-29368 CVE-2020-29660 CVE-2020-36158 CVE-2020-36386 CVE-2021-0129 CVE-2021-3348 CVE-2021-3489 CVE-2021-3564 CVE-2021-3573 CVE-2021-3600 CVE-2021-3635 CVE-2021-3659 CVE-2021-3679 CVE-2021-3732 CVE-2021-20194 CVE-2021-20239 CVE-2021-23133 CVE-2021-28950 CVE-2021-28971 CVE-2021-29155 CVE-2021-29646 CVE-2021-29650 CVE-2021-31440 CVE-2021-31829 CVE-2021-31916 CVE-2021-33200 ==================================================================== 1.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64 Red Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, x86_64
Security Fix(es):

* kernel: out-of-bounds reads in pinctrl subsystem (CVE-2020-0427)
* kernel: Improper input validation in some Intel(R) Ethernet E810 Adapter drivers (CVE-2020-24502)
* kernel: Insufficient access control in some Intel(R) Ethernet E810 Adapter drivers (CVE-2020-24503)
* kernel: Uncontrolled resource consumption in some Intel(R) Ethernet E810 Adapter drivers (CVE-2020-24504)
* kernel: Fragmentation cache not cleared on reconnection (CVE-2020-24586)
* kernel: Reassembling fragments encrypted under different keys (CVE-2020-24587)
* kernel: wifi frame payload being parsed incorrectly as an L2 frame (CVE-2020-24588)
* kernel: Forwarding EAPOL from unauthenticated wifi client (CVE-2020-26139)
* kernel: accepting plaintext data frames in protected networks (CVE-2020-26140)
* kernel: not verifying TKIP MIC of fragmented frames (CVE-2020-26141)
* kernel: accepting fragmented plaintext frames in protected networks (CVE-2020-26143)
* kernel: accepting unencrypted A-MSDU frames that start with RFC1042 header (CVE-2020-26144)
* kernel: accepting plaintext broadcast fragments as full frames (CVE-2020-26145)
* kernel: reassembling encrypted fragments with non-consecutive packet numbers (CVE-2020-26146)
* kernel: reassembling mixed encrypted/plaintext fragments (CVE-2020-26147)
* kernel: powerpc: RTAS calls can be used to compromise kernel integrity (CVE-2020-27777)
* kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check (CVE-2020-29368)
* kernel: locking inconsistency in tty_io.c and tty_jobctrl.c can lead to a read-after-free (CVE-2020-29660)
* kernel: buffer overflow in mwifiex_cmd_802_11_ad_hoc_start function via a long SSID value (CVE-2020-36158)
* kernel: slab out-of-bounds read in hci_extended_inquiry_result_evt() (CVE-2020-36386)
* kernel: Improper access control in BlueZ may allow information disclosure vulnerability (CVE-2021-0129)
* kernel: Use-after-free in ndb_queue_rq() in drivers/block/nbd.c (CVE-2021-3348)
* kernel: Linux kernel eBPF RINGBUF map oversized allocation (CVE-2021-3489)
* kernel: double free in bluetooth subsystem when the HCI device initialization fails (CVE-2021-3564)
* kernel: use-after-free in function hci_sock_bound_ioctl() (CVE-2021-3573)
* kernel: eBPF 32-bit source register truncation on div/mod (CVE-2021-3600)
* kernel: flowtable list del corruption with kernel BUG at lib/list_debug.c:50 (CVE-2021-3635)
* kernel: NULL pointer dereference in llsec_key_alloc() in net/mac802154/llsec.c (CVE-2021-3659)
* kernel: DoS in rb_per_cpu_empty() (CVE-2021-3679)
* kernel: Mounting overlayfs inside an unprivileged user namespace can reveal files (CVE-2021-3732)
* kernel: heap overflow in __cgroup_bpf_run_filter_getsockopt() (CVE-2021-20194)
* kernel: setsockopt System Call Untrusted Pointer Dereference Information Disclosure (CVE-2021-20239)
* kernel: Race condition in sctp_destroy_sock list_del (CVE-2021-23133)
* kernel: fuse: stall on CPU can occur because a retry loop continually finds the same bad inode (CVE-2021-28950)
* kernel: System crash in intel_pmu_drain_pebs_nhm in arch/x86/events/intel/ds.c (CVE-2021-28971)
* kernel: protection can be bypassed to leak content of kernel memory (CVE-2021-29155)
* kernel: improper input validation in tipc_nl_retrieve_key function in net/tipc/node.c (CVE-2021-29646)
* kernel: lack a full memory barrier may lead to DoS (CVE-2021-29650)
* kernel: local escalation of privileges in handling of eBPF programs (CVE-2021-31440)
* kernel: protection of stack pointer against speculative pointer arithmetic can be bypassed to leak content of kernel memory (CVE-2021-31829)
* kernel: out of bounds array access in drivers/md/dm-ioctl.c (CVE-2021-31916)
* kernel: out-of-bounds reads and writes due to enforcing incorrect limits for pointer arithmetic operations by BPF verifier (CVE-2021-33200)
- Solution:
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section.
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
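As with the Ubuntu update above, the running kernel can be compared against the newest installed kernel package once the system is back up. A minimal sketch on a hypothetical RPM-based host (the query and sort commands are illustrative, not part of the advisory):

```shell
#!/bin/sh
# Sketch: compare the running kernel release with the newest installed
# kernel package. Falls back gracefully on hosts without rpm.
running="$(uname -r)"
if command -v rpm >/dev/null 2>&1; then
  newest="$(rpm -q --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' kernel 2>/dev/null | sort -V | tail -n1)"
else
  newest="(rpm not available on this host)"
fi
echo "running:   $running"
echo "installed: $newest"
```

If the two values differ on an RPM-based system, the update has been installed but the reboot required by this advisory has not yet taken effect.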
- Bugs fixed (https://bugzilla.redhat.com/):
1509204 - dlm: Add ability to set SO_MARK on DLM sockets
1793880 - Unreliable RTC synchronization (11-minute mode)
1816493 - [RHEL 8.3] Discard request from mkfs.xfs takes too much time on raid10
1900844 - CVE-2020-27777 kernel: powerpc: RTAS calls can be used to compromise kernel integrity
1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check
1906522 - CVE-2020-29660 kernel: locking inconsistency in drivers/tty/tty_io.c and drivers/tty/tty_jobctrl.c can lead to a read-after-free
1912683 - CVE-2021-20194 kernel: heap overflow in __cgroup_bpf_run_filter_getsockopt()
1913348 - CVE-2020-36158 kernel: buffer overflow in mwifiex_cmd_802_11_ad_hoc_start function in drivers/net/wireless/marvell/mwifiex/join.c via a long SSID value
1915825 - Allow falling back to genfscon labeling when the FS doesn't support xattrs and there is a fs_use_xattr rule for it
1919893 - CVE-2020-0427 kernel: out-of-bounds reads in pinctrl subsystem.
1921958 - CVE-2021-3348 kernel: Use-after-free in ndb_queue_rq() in drivers/block/nbd.c
1923636 - CVE-2021-20239 kernel: setsockopt System Call Untrusted Pointer Dereference Information Disclosure
1930376 - CVE-2020-24504 kernel: Uncontrolled resource consumption in some Intel(R) Ethernet E810 Adapter drivers
1930379 - CVE-2020-24502 kernel: Improper input validation in some Intel(R) Ethernet E810 Adapter drivers
1930381 - CVE-2020-24503 kernel: Insufficient access control in some Intel(R) Ethernet E810 Adapter drivers
1933527 - Files on cifs mount can get mixed contents when underlying file is removed but inode number is reused, when mounted with 'serverino' and 'cache=strict '
1939341 - CNB: net: add inline function skb_csum_is_sctp
1941762 - CVE-2021-28950 kernel: fuse: stall on CPU can occur because a retry loop continually finds the same bad inode
1941784 - CVE-2021-28971 kernel: System crash in intel_pmu_drain_pebs_nhm in arch/x86/events/intel/ds.c
1945345 - CVE-2021-29646 kernel: improper input validation in tipc_nl_retrieve_key function in net/tipc/node.c
1945388 - CVE-2021-29650 kernel: lack a full memory barrier upon the assignment of a new table value in net/netfilter/x_tables.c and include/linux/netfilter/x_tables.h may lead to DoS
1946965 - CVE-2021-31916 kernel: out of bounds array access in drivers/md/dm-ioctl.c
1948772 - CVE-2021-23133 kernel: Race condition in sctp_destroy_sock list_del
1951595 - CVE-2021-29155 kernel: protection for sequences of pointer arithmetic operations against speculatively out-of-bounds loads can be bypassed to leak content of kernel memory
1953847 - [ethtool] The NLM_F_MULTI should be used for NLM_F_DUMP
1954588 - RHEL kernel 8.2 and higher are affected by data corruption bug in raid1 arrays using bitmaps.
1957788 - CVE-2021-31829 kernel: protection of stack pointer against speculative pointer arithmetic can be bypassed to leak content of kernel memory
1959559 - CVE-2021-3489 kernel: Linux kernel eBPF RINGBUF map oversized allocation
1959642 - CVE-2020-24586 kernel: Fragmentation cache not cleared on reconnection
1959654 - CVE-2020-24587 kernel: Reassembling fragments encrypted under different keys
1959657 - CVE-2020-24588 kernel: wifi frame payload being parsed incorrectly as an L2 frame
1959663 - CVE-2020-26139 kernel: Forwarding EAPOL from unauthenticated wifi client
1960490 - CVE-2020-26140 kernel: accepting plaintext data frames in protected networks
1960492 - CVE-2020-26141 kernel: not verifying TKIP MIC of fragmented frames
1960496 - CVE-2020-26143 kernel: accepting fragmented plaintext frames in protected networks
1960498 - CVE-2020-26144 kernel: accepting unencrypted A-MSDU frames that start with RFC1042 header
1960500 - CVE-2020-26145 kernel: accepting plaintext broadcast fragments as full frames
1960502 - CVE-2020-26146 kernel: reassembling encrypted fragments with non-consecutive packet numbers
1960504 - CVE-2020-26147 kernel: reassembling mixed encrypted/plaintext fragments
1960708 - please add CAP_CHECKPOINT_RESTORE to capability.h
1964028 - CVE-2021-31440 kernel: local escalation of privileges in handling of eBPF programs
1964139 - CVE-2021-3564 kernel: double free in bluetooth subsystem when the HCI device initialization fails
1965038 - CVE-2021-0129 kernel: Improper access control in BlueZ may allow information disclosure vulnerability.
1965360 - kernel: get_timespec64 does not ignore padding in compat syscalls
1965458 - CVE-2021-33200 kernel: out-of-bounds reads and writes due to enforcing incorrect limits for pointer arithmetic operations by BPF verifier
1966578 - CVE-2021-3573 kernel: use-after-free in function hci_sock_bound_ioctl()
1969489 - CVE-2020-36386 kernel: slab out-of-bounds read in hci_extended_inquiry_result_evt() in net/bluetooth/hci_event.c
1971101 - ceph: potential data corruption in cephfs write_begin codepath
1972278 - libceph: allow addrvecs with a single NONE/blank address
1974627 - [TIPC] kernel BUG at lib/list_debug.c:31!
1975182 - CVE-2021-33909 kernel: size_t-to-int conversion vulnerability in the filesystem layer [rhel-8.5.0]
1975949 - CVE-2021-3659 kernel: NULL pointer dereference in llsec_key_alloc() in net/mac802154/llsec.c
1976679 - blk-mq: fix/improve io scheduler batching dispatch
1976699 - [SCTP]WARNING: CPU: 29 PID: 3165 at mm/page_alloc.c:4579 __alloc_pages_slowpath+0xb74/0xd00
1976946 - CVE-2021-3635 kernel: flowtable list del corruption with kernel BUG at lib/list_debug.c:50
1976969 - XFS: followup to XFS sync to upstream v5.10 (re BZ1937116)
1977162 - [XDP] test program warning: libbpf: elf: skipping unrecognized data section(16) .eh_frame
1977422 - Missing backport of IMA boot aggregate calculation in rhel 8.4 kernel
1977537 - RHEL8.5: Update the kernel workqueue code to v5.12 level
1977850 - geneve virtual devices lack the NETIF_F_FRAGLIST feature
1978369 - dm writecache: sync with upstream 5.14
1979070 - Inaccessible NFS server overloads clients (native_queued_spin_lock_slowpath connotation?)
1979680 - Backport openvswitch tracepoints
1981954 - CVE-2021-3600 kernel: eBPF 32-bit source register truncation on div/mod
1986138 - Lockd invalid cast to nlm_lockowner
1989165 - CVE-2021-3679 kernel: DoS in rb_per_cpu_empty()
1989999 - ceph omnibus backport for RHEL-8.5.0
1991976 - block: fix New warning in nvme_setup_discard
1992700 - blk-mq: fix kernel panic when iterating over flush request
1995249 - CVE-2021-3732 kernel: overlayfs: Mounting overlayfs inside an unprivileged user namespace can reveal files
1996854 - dm crypt: Avoid percpu_counter spinlock contention in crypt_page_alloc()
- Package List:
Red Hat Enterprise Linux BaseOS (v. 8):
Source: kernel-4.18.0-348.el8.src.rpm
aarch64: bpftool-4.18.0-348.el8.aarch64.rpm bpftool-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-4.18.0-348.el8.aarch64.rpm kernel-core-4.18.0-348.el8.aarch64.rpm kernel-cross-headers-4.18.0-348.el8.aarch64.rpm kernel-debug-4.18.0-348.el8.aarch64.rpm kernel-debug-core-4.18.0-348.el8.aarch64.rpm kernel-debug-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debug-devel-4.18.0-348.el8.aarch64.rpm kernel-debug-modules-4.18.0-348.el8.aarch64.rpm kernel-debug-modules-extra-4.18.0-348.el8.aarch64.rpm kernel-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-348.el8.aarch64.rpm kernel-devel-4.18.0-348.el8.aarch64.rpm kernel-headers-4.18.0-348.el8.aarch64.rpm kernel-modules-4.18.0-348.el8.aarch64.rpm kernel-modules-extra-4.18.0-348.el8.aarch64.rpm kernel-tools-4.18.0-348.el8.aarch64.rpm kernel-tools-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-tools-libs-4.18.0-348.el8.aarch64.rpm perf-4.18.0-348.el8.aarch64.rpm perf-debuginfo-4.18.0-348.el8.aarch64.rpm python3-perf-4.18.0-348.el8.aarch64.rpm python3-perf-debuginfo-4.18.0-348.el8.aarch64.rpm
noarch: kernel-abi-stablelists-4.18.0-348.el8.noarch.rpm kernel-doc-4.18.0-348.el8.noarch.rpm
ppc64le: bpftool-4.18.0-348.el8.ppc64le.rpm bpftool-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-4.18.0-348.el8.ppc64le.rpm kernel-core-4.18.0-348.el8.ppc64le.rpm kernel-cross-headers-4.18.0-348.el8.ppc64le.rpm kernel-debug-4.18.0-348.el8.ppc64le.rpm kernel-debug-core-4.18.0-348.el8.ppc64le.rpm kernel-debug-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debug-devel-4.18.0-348.el8.ppc64le.rpm kernel-debug-modules-4.18.0-348.el8.ppc64le.rpm kernel-debug-modules-extra-4.18.0-348.el8.ppc64le.rpm kernel-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-348.el8.ppc64le.rpm kernel-devel-4.18.0-348.el8.ppc64le.rpm kernel-headers-4.18.0-348.el8.ppc64le.rpm kernel-modules-4.18.0-348.el8.ppc64le.rpm kernel-modules-extra-4.18.0-348.el8.ppc64le.rpm kernel-tools-4.18.0-348.el8.ppc64le.rpm kernel-tools-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-tools-libs-4.18.0-348.el8.ppc64le.rpm perf-4.18.0-348.el8.ppc64le.rpm perf-debuginfo-4.18.0-348.el8.ppc64le.rpm python3-perf-4.18.0-348.el8.ppc64le.rpm python3-perf-debuginfo-4.18.0-348.el8.ppc64le.rpm
s390x: bpftool-4.18.0-348.el8.s390x.rpm bpftool-debuginfo-4.18.0-348.el8.s390x.rpm kernel-4.18.0-348.el8.s390x.rpm kernel-core-4.18.0-348.el8.s390x.rpm kernel-cross-headers-4.18.0-348.el8.s390x.rpm kernel-debug-4.18.0-348.el8.s390x.rpm kernel-debug-core-4.18.0-348.el8.s390x.rpm kernel-debug-debuginfo-4.18.0-348.el8.s390x.rpm kernel-debug-devel-4.18.0-348.el8.s390x.rpm kernel-debug-modules-4.18.0-348.el8.s390x.rpm kernel-debug-modules-extra-4.18.0-348.el8.s390x.rpm kernel-debuginfo-4.18.0-348.el8.s390x.rpm kernel-debuginfo-common-s390x-4.18.0-348.el8.s390x.rpm kernel-devel-4.18.0-348.el8.s390x.rpm kernel-headers-4.18.0-348.el8.s390x.rpm kernel-modules-4.18.0-348.el8.s390x.rpm kernel-modules-extra-4.18.0-348.el8.s390x.rpm kernel-tools-4.18.0-348.el8.s390x.rpm kernel-tools-debuginfo-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-core-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-debuginfo-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-devel-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-modules-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-modules-extra-4.18.0-348.el8.s390x.rpm perf-4.18.0-348.el8.s390x.rpm perf-debuginfo-4.18.0-348.el8.s390x.rpm python3-perf-4.18.0-348.el8.s390x.rpm python3-perf-debuginfo-4.18.0-348.el8.s390x.rpm
x86_64: bpftool-4.18.0-348.el8.x86_64.rpm bpftool-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-4.18.0-348.el8.x86_64.rpm kernel-core-4.18.0-348.el8.x86_64.rpm kernel-cross-headers-4.18.0-348.el8.x86_64.rpm kernel-debug-4.18.0-348.el8.x86_64.rpm kernel-debug-core-4.18.0-348.el8.x86_64.rpm kernel-debug-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debug-devel-4.18.0-348.el8.x86_64.rpm kernel-debug-modules-4.18.0-348.el8.x86_64.rpm kernel-debug-modules-extra-4.18.0-348.el8.x86_64.rpm kernel-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-348.el8.x86_64.rpm kernel-devel-4.18.0-348.el8.x86_64.rpm kernel-headers-4.18.0-348.el8.x86_64.rpm kernel-modules-4.18.0-348.el8.x86_64.rpm kernel-modules-extra-4.18.0-348.el8.x86_64.rpm kernel-tools-4.18.0-348.el8.x86_64.rpm kernel-tools-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-tools-libs-4.18.0-348.el8.x86_64.rpm perf-4.18.0-348.el8.x86_64.rpm perf-debuginfo-4.18.0-348.el8.x86_64.rpm python3-perf-4.18.0-348.el8.x86_64.rpm python3-perf-debuginfo-4.18.0-348.el8.x86_64.rpm
Red Hat Enterprise Linux CRB (v. 8):
aarch64: bpftool-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debug-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-348.el8.aarch64.rpm kernel-tools-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-tools-libs-devel-4.18.0-348.el8.aarch64.rpm perf-debuginfo-4.18.0-348.el8.aarch64.rpm python3-perf-debuginfo-4.18.0-348.el8.aarch64.rpm
ppc64le: bpftool-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debug-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-348.el8.ppc64le.rpm kernel-tools-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-tools-libs-devel-4.18.0-348.el8.ppc64le.rpm perf-debuginfo-4.18.0-348.el8.ppc64le.rpm python3-perf-debuginfo-4.18.0-348.el8.ppc64le.rpm
x86_64: bpftool-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debug-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-348.el8.x86_64.rpm kernel-tools-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-tools-libs-devel-4.18.0-348.el8.x86_64.rpm perf-debuginfo-4.18.0-348.el8.x86_64.rpm python3-perf-debuginfo-4.18.0-348.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYYrdRdzjgjWX9erEAQhs0w//as9X4T+FCf3TAbcNIStxlOK6fbJoAlST FrgNJnRH3RmT+VxRSLWZcsJQf78kudeJWtMezbGSVREfhCMBCGhKZ7mvVp5P7J8l bobmdaap3hqkPqq66VuKxGuS+6j0rXXgGQH034yzoX+L/lx6KV9qdAnZZO+7kWcy SfX0GkLg0ARDMfsoUKwVmeUeNLhPlJ4ZH2rBdZ4FhjyEAG/5yL9JwU/VNReWHjhW HgarTuSnFR3vLQDKyjMIEEiBPOI162hS2j3Ba/A/1hJ70HOjloJnd0eWYGxSuIfC DRrzlacFNAzBPZsbRFi1plXrHh5LtNoBBWjl+xyb6jRsB8eXgS+WhzUhOXGUv01E lJTwFy5Kz71d+cAhRXgmz5gVgWuoNJw8AEImefWcy4n0EEK55vdFe0Sl7BfZiwpD Jhx97He6OurNnLrYyJJ0+TsU1L33794Ag2AJZnN1PLFUyrKKNlD1ZWtdsJg99klK dQteUTnnUhgDG5Tqulf0wX19BEkLd/O6CRyGueJcV4h4PFpSoWOh5Yy/BlokFzc8 zf14PjuVueIodaIUXtK+70Zmw7tg09Dx5Asyfuk5hWFPYv856nHlDn7PT724CU8v 1cp96h1IjLR6cF17NO2JCcbU0XZEW+aCkGkPcsY8DhBmaZqxUxXObvTD80Mm7EvN +PuV5cms0sE=2UUA -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

- Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this errata update:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
For Red Hat OpenShift Logging 5.3, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202104-1571",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.11"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.5"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "32"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.15"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire \\\u0026 hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.11.16"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.14.232"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "broadcom",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.4.114"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.19.189"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.20"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.10.32"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.10"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.11.16",
"versionStartIncluding": "5.11",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.10.32",
"versionStartIncluding": "5.5",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.4.114",
"versionStartIncluding": "4.20",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.19.189",
"versionStartIncluding": "4.15",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.14.232",
"versionStartIncluding": "4.10",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:solidfire_baseboard_management_controller_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:solidfire_baseboard_management_controller:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Ubuntu",
"sources": [
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
}
],
"trust": 0.5
},
"cve": "CVE-2021-23133",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "MEDIUM",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "COMPLETE",
"baseScore": 6.9,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.4,
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:L/AC:M/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "MEDIUM",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULMON",
"availabilityImpact": "COMPLETE",
"baseScore": 6.9,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.4,
"id": "CVE-2021-23133",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "MEDIUM",
"trust": 0.1,
"userInteractionRequired": null,
"vectorString": "AV:L/AC:M/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.0,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.0,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "psirt@paloaltonetworks.com",
"availabilityImpact": "HIGH",
"baseScore": 6.7,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 0.8,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "HIGH",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-23133",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "psirt@paloaltonetworks.com",
"id": "CVE-2021-23133",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2021-23133",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "A race condition in Linux kernel SCTP sockets (net/sctp/socket.c) before 5.12-rc8 can lead to kernel privilege escalation from the context of a network service or an unprivileged process. If sctp_destroy_sock is called without sock_net(sk)-\u003esctp.addr_wq_lock then an element is removed from the auto_asconf_splist list without any proper locking. This can be exploited by an attacker with network service privileges to escalate to root or from the context of an unprivileged user directly if a BPF_CGROUP_INET_SOCK_CREATE is attached which denies creation of some SCTP socket. ==========================================================================\nUbuntu Security Notice USN-4997-2\nJune 25, 2021\n\nlinux-kvm vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.04\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-kvm: Linux kernel for cloud environments\n\nDetails:\n\nUSN-4997-1 fixed vulnerabilities in the Linux kernel for Ubuntu 21.04. \nThis update provides the corresponding updates for the Linux KVM\nkernel for Ubuntu 21.04. A local attacker could use this issue to execute arbitrary\ncode. (CVE-2021-3609)\n\nPiotr Krysiuk discovered that the eBPF implementation in the Linux kernel\ndid not properly enforce limits for pointer operations. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2021-33200)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation did\nnot properly clear received fragments from memory in some situations. A\nphysically proximate attacker could possibly use this issue to inject\npackets or expose sensitive information. (CVE-2020-24586)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\nincorrectly handled encrypted fragments. 
A physically proximate attacker\ncould possibly use this issue to decrypt fragments. (CVE-2020-24587)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\nincorrectly handled certain malformed frames. If a user were tricked into\nconnecting to a malicious server, a physically proximate attacker could use\nthis issue to inject packets. (CVE-2020-24588)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\nincorrectly handled EAPOL frames from unauthenticated senders. A physically\nproximate attacker could inject malicious packets to cause a denial of\nservice (system crash). (CVE-2020-26139)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation did\nnot properly verify certain fragmented frames. A physically proximate\nattacker could possibly use this issue to inject or decrypt packets. \n(CVE-2020-26141)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\naccepted plaintext fragments in certain situations. A physically proximate\nattacker could use this issue to inject packets. (CVE-2020-26145)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation could\nreassemble mixed encrypted and plaintext fragments. A physically proximate\nattacker could possibly use this issue to inject packets or exfiltrate\nselected fragments. A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. (CVE-2021-23133)\n\nOr Cohen and Nadav Markus discovered a use-after-free vulnerability in the\nnfc implementation in the Linux kernel. A privileged local attacker could\nuse this issue to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2021-23134)\n\nManfred Paul discovered that the extended Berkeley Packet Filter (eBPF)\nimplementation in the Linux kernel contained an out-of-bounds\nvulnerability. A local attacker could use this issue to execute arbitrary\ncode. 
(CVE-2021-31440)\n\nPiotr Krysiuk discovered that the eBPF implementation in the Linux kernel\ndid not properly prevent speculative loads in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). An attacker could use this\nissue to possibly execute arbitrary code. (CVE-2021-32399)\n\nIt was discovered that a use-after-free existed in the Bluetooth HCI driver\nof the Linux kernel. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2021-33034)\n\nIt was discovered that an out-of-bounds (OOB) memory access flaw existed in\nthe f2fs module of the Linux kernel. A local attacker could use this issue\nto cause a denial of service (system crash). (CVE-2021-3506)\n\nMathias Krause discovered that a null pointer dereference existed in the\nNitro Enclaves kernel driver of the Linux kernel. A local attacker could\nuse this issue to cause a denial of service or possibly execute arbitrary\ncode. (CVE-2021-3543)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.04:\n linux-image-5.11.0-1009-kvm 5.11.0-1009.9\n linux-image-kvm 5.11.0.1009.9\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 
\n\nReferences:\n https://ubuntu.com/security/notices/USN-4997-2\n https://ubuntu.com/security/notices/USN-4997-1\n CVE-2020-24586, CVE-2020-24587, CVE-2020-24588, CVE-2020-26139,\n CVE-2020-26141, CVE-2020-26145, CVE-2020-26147, CVE-2021-23133,\n CVE-2021-23134, CVE-2021-31440, CVE-2021-31829, CVE-2021-32399,\n CVE-2021-33034, CVE-2021-33200, CVE-2021-3506, CVE-2021-3543,\n CVE-2021-3609\n\nPackage Information:\n https://launchpad.net/ubuntu/+source/linux-kvm/5.11.0-1009.9\n\n. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. \n\nSecurity Fix(es):\n* kernel: out-of-bounds reads in pinctrl subsystem. Bugs fixed (https://bugzilla.redhat.com/):\n\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: kernel security, bug fix, and enhancement update\nAdvisory ID: RHSA-2021:4356-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4356\nIssue date: 2021-11-09\nCVE Names: CVE-2020-0427 CVE-2020-24502 CVE-2020-24503\n CVE-2020-24504 CVE-2020-24586 CVE-2020-24587\n CVE-2020-24588 CVE-2020-26139 CVE-2020-26140\n CVE-2020-26141 CVE-2020-26143 CVE-2020-26144\n CVE-2020-26145 CVE-2020-26146 CVE-2020-26147\n CVE-2020-27777 CVE-2020-29368 CVE-2020-29660\n CVE-2020-36158 CVE-2020-36386 CVE-2021-0129\n CVE-2021-3348 CVE-2021-3489 CVE-2021-3564\n CVE-2021-3573 CVE-2021-3600 CVE-2021-3635\n CVE-2021-3659 CVE-2021-3679 CVE-2021-3732\n CVE-2021-20194 CVE-2021-20239 CVE-2021-23133\n CVE-2021-28950 CVE-2021-28971 CVE-2021-29155\n CVE-2021-29646 CVE-2021-29650 CVE-2021-31440\n CVE-2021-31829 CVE-2021-31916 
CVE-2021-33200\n====================================================================\n1. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, x86_64\n\n3. \n\nSecurity Fix(es):\n* kernel: out-of-bounds reads in pinctrl subsystem (CVE-2020-0427)\n* kernel: Improper input validation in some Intel(R) Ethernet E810 Adapter\ndrivers (CVE-2020-24502)\n* kernel: Insufficient access control in some Intel(R) Ethernet E810\nAdapter drivers (CVE-2020-24503)\n* kernel: Uncontrolled resource consumption in some Intel(R) Ethernet E810\nAdapter drivers (CVE-2020-24504)\n* kernel: Fragmentation cache not cleared on reconnection (CVE-2020-24586)\n* kernel: Reassembling fragments encrypted under different keys\n(CVE-2020-24587)\n* kernel: wifi frame payload being parsed incorrectly as an L2 frame\n(CVE-2020-24588)\n* kernel: Forwarding EAPOL from unauthenticated wifi client\n(CVE-2020-26139)\n* kernel: accepting plaintext data frames in protected networks\n(CVE-2020-26140)\n* kernel: not verifying TKIP MIC of fragmented frames (CVE-2020-26141)\n* kernel: accepting fragmented plaintext frames in protected networks\n(CVE-2020-26143)\n* kernel: accepting unencrypted A-MSDU frames that start with RFC1042\nheader (CVE-2020-26144)\n* kernel: accepting plaintext broadcast fragments as full frames\n(CVE-2020-26145)\n* kernel: powerpc: RTAS calls can be used to compromise kernel integrity\n(CVE-2020-27777)\n* kernel: locking inconsistency in tty_io.c and tty_jobctrl.c can lead to a\nread-after-free (CVE-2020-29660)\n* kernel: buffer overflow in mwifiex_cmd_802_11_ad_hoc_start function via 
a\nlong SSID value (CVE-2020-36158)\n* kernel: slab out-of-bounds read in hci_extended_inquiry_result_evt()\n(CVE-2020-36386)\n* kernel: Improper access control in BlueZ may allow information disclosure\nvulnerability. (CVE-2021-0129)\n* kernel: Use-after-free in ndb_queue_rq() in drivers/block/nbd.c\n(CVE-2021-3348)\n* kernel: Linux kernel eBPF RINGBUF map oversized allocation\n(CVE-2021-3489)\n* kernel: double free in bluetooth subsystem when the HCI device\ninitialization fails (CVE-2021-3564)\n* kernel: use-after-free in function hci_sock_bound_ioctl() (CVE-2021-3573)\n* kernel: eBPF 32-bit source register truncation on div/mod (CVE-2021-3600)\n* kernel: DoS in rb_per_cpu_empty() (CVE-2021-3679)\n* kernel: Mounting overlayfs inside an unprivileged user namespace can\nreveal files (CVE-2021-3732)\n* kernel: heap overflow in __cgroup_bpf_run_filter_getsockopt()\n(CVE-2021-20194)\n* kernel: Race condition in sctp_destroy_sock list_del (CVE-2021-23133)\n* kernel: fuse: stall on CPU can occur because a retry loop continually\nfinds the same bad inode (CVE-2021-28950)\n* kernel: System crash in intel_pmu_drain_pebs_nhm in\narch/x86/events/intel/ds.c (CVE-2021-28971)\n* kernel: protection can be bypassed to leak content of kernel memory\n(CVE-2021-29155)\n* kernel: improper input validation in tipc_nl_retrieve_key function in\nnet/tipc/node.c (CVE-2021-29646)\n* kernel: lack a full memory barrier may lead to DoS (CVE-2021-29650)\n* kernel: local escalation of privileges in handling of eBPF programs\n(CVE-2021-31440)\n* kernel: protection of stack pointer against speculative pointer\narithmetic can be bypassed to leak content of kernel memory\n(CVE-2021-31829)\n* kernel: out-of-bounds reads and writes due to enforcing incorrect limits\nfor pointer arithmetic operations by BPF verifier (CVE-2021-33200)\n* kernel: reassembling encrypted fragments with non-consecutive packet\nnumbers (CVE-2020-26146)\n* kernel: reassembling mixed encrypted/plaintext fragments 
(CVE-2020-26147)\n* kernel: the copy-on-write implementation can grant unintended write\naccess because of a race condition in a THP mapcount check (CVE-2020-29368)\n* kernel: flowtable list del corruption with kernel BUG at\nlib/list_debug.c:50 (CVE-2021-3635)\n* kernel: NULL pointer dereference in llsec_key_alloc() in\nnet/mac802154/llsec.c (CVE-2021-3659)\n* kernel: setsockopt System Call Untrusted Pointer Dereference Information\nDisclosure (CVE-2021-20239)\n* kernel: out of bounds array access in drivers/md/dm-ioctl.c\n(CVE-2021-31916)\n\n4. Solution:\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. \n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1509204 - dlm: Add ability to set SO_MARK on DLM sockets\n1793880 - Unreliable RTC synchronization (11-minute mode)\n1816493 - [RHEL 8.3] Discard request from mkfs.xfs takes too much time on raid10\n1900844 - CVE-2020-27777 kernel: powerpc: RTAS calls can be used to compromise kernel integrity\n1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check\n1906522 - CVE-2020-29660 kernel: locking inconsistency in drivers/tty/tty_io.c and drivers/tty/tty_jobctrl.c can lead to a read-after-free\n1912683 - CVE-2021-20194 kernel: heap overflow in __cgroup_bpf_run_filter_getsockopt()\n1913348 - CVE-2020-36158 kernel: buffer overflow in mwifiex_cmd_802_11_ad_hoc_start function in drivers/net/wireless/marvell/mwifiex/join.c via a long SSID value\n1915825 - Allow falling back to genfscon labeling when the FS doesn\u0027t support xattrs and there is a fs_use_xattr rule for it\n1919893 - CVE-2020-0427 kernel: out-of-bounds reads in pinctrl subsystem. 
\n1921958 - CVE-2021-3348 kernel: Use-after-free in ndb_queue_rq() in drivers/block/nbd.c\n1923636 - CVE-2021-20239 kernel: setsockopt System Call Untrusted Pointer Dereference Information Disclosure\n1930376 - CVE-2020-24504 kernel: Uncontrolled resource consumption in some Intel(R) Ethernet E810 Adapter drivers\n1930379 - CVE-2020-24502 kernel: Improper input validation in some Intel(R) Ethernet E810 Adapter drivers\n1930381 - CVE-2020-24503 kernel: Insufficient access control in some Intel(R) Ethernet E810 Adapter drivers\n1933527 - Files on cifs mount can get mixed contents when underlying file is removed but inode number is reused, when mounted with \u0027serverino\u0027 and \u0027cache=strict \u0027\n1939341 - CNB: net: add inline function skb_csum_is_sctp\n1941762 - CVE-2021-28950 kernel: fuse: stall on CPU can occur because a retry loop continually finds the same bad inode\n1941784 - CVE-2021-28971 kernel: System crash in intel_pmu_drain_pebs_nhm in arch/x86/events/intel/ds.c\n1945345 - CVE-2021-29646 kernel: improper input validation in tipc_nl_retrieve_key function in net/tipc/node.c\n1945388 - CVE-2021-29650 kernel: lack a full memory barrier upon the assignment of a new table value in net/netfilter/x_tables.c and include/linux/netfilter/x_tables.h may lead to DoS\n1946965 - CVE-2021-31916 kernel: out of bounds array access in drivers/md/dm-ioctl.c\n1948772 - CVE-2021-23133 kernel: Race condition in sctp_destroy_sock list_del\n1951595 - CVE-2021-29155 kernel: protection for sequences of pointer arithmetic operations against speculatively out-of-bounds loads can be bypassed to leak content of kernel memory\n1953847 - [ethtool] The `NLM_F_MULTI` should be used for `NLM_F_DUMP`\n1954588 - RHEL kernel 8.2 and higher are affected by data corruption bug in raid1 arrays using bitmaps. 
\n1957788 - CVE-2021-31829 kernel: protection of stack pointer against speculative pointer arithmetic can be bypassed to leak content of kernel memory\n1959559 - CVE-2021-3489 kernel: Linux kernel eBPF RINGBUF map oversized allocation\n1959642 - CVE-2020-24586 kernel: Fragmentation cache not cleared on reconnection\n1959654 - CVE-2020-24587 kernel: Reassembling fragments encrypted under different keys\n1959657 - CVE-2020-24588 kernel: wifi frame payload being parsed incorrectly as an L2 frame\n1959663 - CVE-2020-26139 kernel: Forwarding EAPOL from unauthenticated wifi client\n1960490 - CVE-2020-26140 kernel: accepting plaintext data frames in protected networks\n1960492 - CVE-2020-26141 kernel: not verifying TKIP MIC of fragmented frames\n1960496 - CVE-2020-26143 kernel: accepting fragmented plaintext frames in protected networks\n1960498 - CVE-2020-26144 kernel: accepting unencrypted A-MSDU frames that start with RFC1042 header\n1960500 - CVE-2020-26145 kernel: accepting plaintext broadcast fragments as full frames\n1960502 - CVE-2020-26146 kernel: reassembling encrypted fragments with non-consecutive packet numbers\n1960504 - CVE-2020-26147 kernel: reassembling mixed encrypted/plaintext fragments\n1960708 - please add CAP_CHECKPOINT_RESTORE to capability.h\n1964028 - CVE-2021-31440 kernel: local escalation of privileges in handling of eBPF programs\n1964139 - CVE-2021-3564 kernel: double free in bluetooth subsystem when the HCI device initialization fails\n1965038 - CVE-2021-0129 kernel: Improper access control in BlueZ may allow information disclosure vulnerability. 
\n1965360 - kernel: get_timespec64 does not ignore padding in compat syscalls\n1965458 - CVE-2021-33200 kernel: out-of-bounds reads and writes due to enforcing incorrect limits for pointer arithmetic operations by BPF verifier\n1966578 - CVE-2021-3573 kernel: use-after-free in function hci_sock_bound_ioctl()\n1969489 - CVE-2020-36386 kernel: slab out-of-bounds read in hci_extended_inquiry_result_evt() in net/bluetooth/hci_event.c\n1971101 - ceph: potential data corruption in cephfs write_begin codepath\n1972278 - libceph: allow addrvecs with a single NONE/blank address\n1974627 - [TIPC] kernel BUG at lib/list_debug.c:31!\n1975182 - CVE-2021-33909 kernel: size_t-to-int conversion vulnerability in the filesystem layer [rhel-8.5.0]\n1975949 - CVE-2021-3659 kernel: NULL pointer dereference in llsec_key_alloc() in net/mac802154/llsec.c\n1976679 - blk-mq: fix/improve io scheduler batching dispatch\n1976699 - [SCTP]WARNING: CPU: 29 PID: 3165 at mm/page_alloc.c:4579 __alloc_pages_slowpath+0xb74/0xd00\n1976946 - CVE-2021-3635 kernel: flowtable list del corruption with kernel BUG at lib/list_debug.c:50\n1976969 - XFS: followup to XFS sync to upstream v5.10 (re BZ1937116)\n1977162 - [XDP] test program warning: libbpf: elf: skipping unrecognized data section(16) .eh_frame\n1977422 - Missing backport of IMA boot aggregate calculation in rhel 8.4 kernel\n1977537 - RHEL8.5: Update the kernel workqueue code to v5.12 level\n1977850 - geneve virtual devices lack the NETIF_F_FRAGLIST feature\n1978369 - dm writecache: sync with upstream 5.14\n1979070 - Inaccessible NFS server overloads clients (native_queued_spin_lock_slowpath connotation?)\n1979680 - Backport openvswitch tracepoints\n1981954 - CVE-2021-3600 kernel: eBPF 32-bit source register truncation on div/mod\n1986138 - Lockd invalid cast to nlm_lockowner\n1989165 - CVE-2021-3679 kernel: DoS in rb_per_cpu_empty()\n1989999 - ceph omnibus backport for RHEL-8.5.0\n1991976 - block: fix New warning in nvme_setup_discard\n1992700 - 
blk-mq: fix kernel panic when iterating over flush request\n1995249 - CVE-2021-3732 kernel: overlayfs: Mounting overlayfs inside an unprivileged user namespace can reveal files\n1996854 - dm crypt: Avoid percpu_counter spinlock contention in crypt_page_alloc()\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS (v. 8):\n\nSource:\nkernel-4.18.0-348.el8.src.rpm\n\naarch64:\nbpftool-4.18.0-348.el8.aarch64.rpm\nbpftool-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-4.18.0-348.el8.aarch64.rpm\nkernel-core-4.18.0-348.el8.aarch64.rpm\nkernel-cross-headers-4.18.0-348.el8.aarch64.rpm\nkernel-debug-4.18.0-348.el8.aarch64.rpm\nkernel-debug-core-4.18.0-348.el8.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debug-devel-4.18.0-348.el8.aarch64.rpm\nkernel-debug-modules-4.18.0-348.el8.aarch64.rpm\nkernel-debug-modules-extra-4.18.0-348.el8.aarch64.rpm\nkernel-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-348.el8.aarch64.rpm\nkernel-devel-4.18.0-348.el8.aarch64.rpm\nkernel-headers-4.18.0-348.el8.aarch64.rpm\nkernel-modules-4.18.0-348.el8.aarch64.rpm\nkernel-modules-extra-4.18.0-348.el8.aarch64.rpm\nkernel-tools-4.18.0-348.el8.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-tools-libs-4.18.0-348.el8.aarch64.rpm\nperf-4.18.0-348.el8.aarch64.rpm\nperf-debuginfo-4.18.0-348.el8.aarch64.rpm\npython3-perf-4.18.0-348.el8.aarch64.rpm\npython3-perf-debuginfo-4.18.0-348.el8.aarch64.rpm\n\nnoarch:\nkernel-abi-stablelists-4.18.0-348.el8.noarch.rpm\nkernel-doc-4.18.0-348.el8.noarch.rpm\n\nppc64le:\nbpftool-4.18.0-348.el8.ppc64le.rpm\nbpftool-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-4.18.0-348.el8.ppc64le.rpm\nkernel-core-4.18.0-348.el8.ppc64le.rpm\nkernel-cross-headers-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-core-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-devel-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-modules-4.18.0-348.el8.p
pc64le.rpm\nkernel-debug-modules-extra-4.18.0-348.el8.ppc64le.rpm\nkernel-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-348.el8.ppc64le.rpm\nkernel-devel-4.18.0-348.el8.ppc64le.rpm\nkernel-headers-4.18.0-348.el8.ppc64le.rpm\nkernel-modules-4.18.0-348.el8.ppc64le.rpm\nkernel-modules-extra-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-libs-4.18.0-348.el8.ppc64le.rpm\nperf-4.18.0-348.el8.ppc64le.rpm\nperf-debuginfo-4.18.0-348.el8.ppc64le.rpm\npython3-perf-4.18.0-348.el8.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-348.el8.ppc64le.rpm\n\ns390x:\nbpftool-4.18.0-348.el8.s390x.rpm\nbpftool-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-4.18.0-348.el8.s390x.rpm\nkernel-core-4.18.0-348.el8.s390x.rpm\nkernel-cross-headers-4.18.0-348.el8.s390x.rpm\nkernel-debug-4.18.0-348.el8.s390x.rpm\nkernel-debug-core-4.18.0-348.el8.s390x.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-debug-devel-4.18.0-348.el8.s390x.rpm\nkernel-debug-modules-4.18.0-348.el8.s390x.rpm\nkernel-debug-modules-extra-4.18.0-348.el8.s390x.rpm\nkernel-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-debuginfo-common-s390x-4.18.0-348.el8.s390x.rpm\nkernel-devel-4.18.0-348.el8.s390x.rpm\nkernel-headers-4.18.0-348.el8.s390x.rpm\nkernel-modules-4.18.0-348.el8.s390x.rpm\nkernel-modules-extra-4.18.0-348.el8.s390x.rpm\nkernel-tools-4.18.0-348.el8.s390x.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-core-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-devel-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-modules-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-modules-extra-4.18.0-348.el8.s390x.rpm\nperf-4.18.0-348.el8.s390x.rpm\nperf-debuginfo-4.18.0-348.el8.s390x.rpm\npython3-perf-4.18.0-348.el8.s390x.rpm\npython3-perf-debuginfo-4.18.0-348.el8.s390x.rpm\n\nx86_64:\nbpftool-4.18.0-348.el8.x86_64.rpm\nbpftool
-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-4.18.0-348.el8.x86_64.rpm\nkernel-core-4.18.0-348.el8.x86_64.rpm\nkernel-cross-headers-4.18.0-348.el8.x86_64.rpm\nkernel-debug-4.18.0-348.el8.x86_64.rpm\nkernel-debug-core-4.18.0-348.el8.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debug-devel-4.18.0-348.el8.x86_64.rpm\nkernel-debug-modules-4.18.0-348.el8.x86_64.rpm\nkernel-debug-modules-extra-4.18.0-348.el8.x86_64.rpm\nkernel-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-348.el8.x86_64.rpm\nkernel-devel-4.18.0-348.el8.x86_64.rpm\nkernel-headers-4.18.0-348.el8.x86_64.rpm\nkernel-modules-4.18.0-348.el8.x86_64.rpm\nkernel-modules-extra-4.18.0-348.el8.x86_64.rpm\nkernel-tools-4.18.0-348.el8.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-tools-libs-4.18.0-348.el8.x86_64.rpm\nperf-4.18.0-348.el8.x86_64.rpm\nperf-debuginfo-4.18.0-348.el8.x86_64.rpm\npython3-perf-4.18.0-348.el8.x86_64.rpm\npython3-perf-debuginfo-4.18.0-348.el8.x86_64.rpm\n\nRed Hat Enterprise Linux CRB (v. 
8):\n\naarch64:\nbpftool-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-348.el8.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-tools-libs-devel-4.18.0-348.el8.aarch64.rpm\nperf-debuginfo-4.18.0-348.el8.aarch64.rpm\npython3-perf-debuginfo-4.18.0-348.el8.aarch64.rpm\n\nppc64le:\nbpftool-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-libs-devel-4.18.0-348.el8.ppc64le.rpm\nperf-debuginfo-4.18.0-348.el8.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-348.el8.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-348.el8.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-tools-libs-devel-4.18.0-348.el8.x86_64.rpm\nperf-debuginfo-4.18.0-348.el8.x86_64.rpm\npython3-perf-debuginfo-4.18.0-348.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYYrdRdzjgjWX9erEAQhs0w//as9X4T+FCf3TAbcNIStxlOK6fbJoAlST\nFrgNJnRH3RmT+VxRSLWZcsJQf78kudeJWtMezbGSVREfhCMBCGhKZ7mvVp5P7J8l\nbobmdaap3hqkPqq66VuKxGuS+6j0rXXgGQH034yzoX+L/lx6KV9qdAnZZO+7kWcy\nSfX0GkLg0ARDMfsoUKwVmeUeNLhPlJ4ZH2rBdZ4FhjyEAG/5yL9JwU/VNReWHjhW\nHgarTuSnFR3vLQDKyjMIEEiBPOI162hS2j3Ba/A/1hJ70HOjloJnd0eWYGxSuIfC\nDRrzlacFNAzBPZsbRFi1plXrHh5LtNoBBWjl+xyb6jRsB8eXgS+WhzUhOXGUv01E\nlJTwFy5Kz71d+cAhRXgmz5gVgWuoNJw8AEImefWcy4n0EEK55vdFe0Sl7BfZiwpD\nJhx97He6OurNnLrYyJJ0+TsU1L33794Ag2AJZnN1PLFUyrKKNlD1ZWtdsJg99klK\ndQteUTnnUhgDG5Tqulf0wX19BEkLd/O6CRyGueJcV4h4PFpSoWOh5Yy/BlokFzc8\nzf14PjuVueIodaIUXtK+70Zmw7tg09Dx5Asyfuk5hWFPYv856nHlDn7PT724CU8v\n1cp96h1IjLR6cF17NO2JCcbU0XZEW+aCkGkPcsY8DhBmaZqxUxXObvTD80Mm7EvN\n+PuV5cms0sE=2UUA\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this errata update:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nFor Red Hat OpenShift Logging 5.3, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-23133"
},
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "164875"
},
{
"db": "PACKETSTORM",
"id": "165296"
},
{
"db": "PACKETSTORM",
"id": "164837"
},
{
"db": "PACKETSTORM",
"id": "164967"
}
],
"trust": 1.8
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-23133",
"trust": 2.0
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/10/1",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/10/2",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/04/18/2",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/10/3",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/10/4",
"trust": 1.1
},
{
"db": "VULMON",
"id": "CVE-2021-23133",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163249",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163251",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163262",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163291",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163301",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164875",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165296",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164837",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164967",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "164875"
},
{
"db": "PACKETSTORM",
"id": "165296"
},
{
"db": "PACKETSTORM",
"id": "164837"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"id": "VAR-202104-1571",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.625
},
"last_update_date": "2024-07-23T20:25:58.423000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-23133 log"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-362",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.1,
"url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b166a20b07382b8bc1dcee2a448715c9c2c81b5b"
},
{
"trust": 1.1,
"url": "https://www.openwall.com/lists/oss-security/2021/04/18/2"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2021/05/10/1"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2021/05/10/2"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2021/05/10/3"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2021/05/10/4"
},
{
"trust": 1.0,
"url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00019.html"
},
{
"trust": 1.0,
"url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00020.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/cux2ca63453g34c6kyvbljxjxearzi2x/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/paeq3h6hkno6kucgrzvysfsageux23jl/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/xzashzvcofj4vu2i3bn5w5ephwjq7qwx/"
},
{
"trust": 1.0,
"url": "https://security.netapp.com/advisory/ntap-20210611-0008/"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23133"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26147"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24588"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24586"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26145"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24587"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26141"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26139"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3609"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33200"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31829"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-26143"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24504"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3600"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-20239"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-26144"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3679"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-36158"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3635"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-31829"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-26145"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-36386"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-33200"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-29650"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3573"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-29368"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-20194"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24586"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-26147"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-31916"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-26141"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3348"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-28950"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24588"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-26140"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-31440"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-26146"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-29646"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-29155"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3732"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-0129"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3489"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-29660"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24587"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-26139"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-28971"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24502"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24503"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3659"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3564"
},
{
"trust": 0.4,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-0427"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-23133"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32399"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3506"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23134"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33034"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31440"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-27777"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3543"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-4997-1"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29155"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26144"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24504"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20239"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20194"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0129"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28950"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26143"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26140"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36386"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29660"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28971"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36158"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26146"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35448"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20284"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3487"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36312"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14615"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33033"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20197"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/362.html"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/paeq3h6hkno6kucgrzvysfsageux23jl/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/cux2ca63453g34c6kyvbljxjxearzi2x/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/xzashzvcofj4vu2i3bn5w5ephwjq7qwx/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "http://seclists.org/oss-sec/2021/q2/110"
},
{
"trust": 0.1,
"url": "https://security.archlinux.org/cve-2021-23133"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.11.0-1010.10"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.11.0-1011.11"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.11.0-1012.13"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.11.0-1011.12"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.11.0-1009.9"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.11.0-22.23"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.8/5.8.0-1033.34~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-5.8/5.8.0-1036.38~20.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25670"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.8.0-1029.32"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.8.0-1035.37"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.8.0-59.66"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25671"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.8.0-1038.40"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.8.0-1036.38"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25673"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.8/5.8.0-59.66~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.8.0-1030.32"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-5.8/5.8.0-1035.37~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-5.8/5.8.0-1038.40~20.04.1"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-4999-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.8.0-1033.34"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3600"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1075.83"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5003-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-4.15/4.15.0-1103.116"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-dell300x/4.15.0-1022.26"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.15.0-1106.113"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-4.15/4.15.0-1118.131"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/4.15.0-1089.94"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.15.0-147.151"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.15.0-1106.115"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5000-2"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5000-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.4.0-1041.42"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.11.0-1009.9"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-4997-2"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4140"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5137"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4356"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27777"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33194"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4627"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "164875"
},
{
"db": "PACKETSTORM",
"id": "165296"
},
{
"db": "PACKETSTORM",
"id": "164837"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "164875"
},
{
"db": "PACKETSTORM",
"id": "165296"
},
{
"db": "PACKETSTORM",
"id": "164837"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-04-22T00:00:00",
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"date": "2021-06-23T15:33:13",
"db": "PACKETSTORM",
"id": "163249"
},
{
"date": "2021-06-23T15:35:21",
"db": "PACKETSTORM",
"id": "163251"
},
{
"date": "2021-06-23T15:48:14",
"db": "PACKETSTORM",
"id": "163262"
},
{
"date": "2021-06-27T12:22:22",
"db": "PACKETSTORM",
"id": "163291"
},
{
"date": "2021-06-28T16:22:26",
"db": "PACKETSTORM",
"id": "163301"
},
{
"date": "2021-11-10T17:10:23",
"db": "PACKETSTORM",
"id": "164875"
},
{
"date": "2021-12-15T15:27:05",
"db": "PACKETSTORM",
"id": "165296"
},
{
"date": "2021-11-10T17:04:39",
"db": "PACKETSTORM",
"id": "164837"
},
{
"date": "2021-11-15T17:25:56",
"db": "PACKETSTORM",
"id": "164967"
},
{
"date": "2021-04-22T18:15:08.123000",
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-05-10T00:00:00",
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"date": "2023-11-07T03:30:47.290000",
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
}
],
"trust": 0.5
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Ubuntu Security Notice USN-4997-1",
"sources": [
{
"db": "PACKETSTORM",
"id": "163249"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "arbitrary",
"sources": [
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
}
],
"trust": 0.5
}
}
VAR-202105-1451
Vulnerability from variot - Updated: 2024-07-23 20:21
An issue was discovered in Linux: KVM through Improper handling of VM_IO|VM_PFNMAP vmas in KVM can bypass RO checks and can lead to pages being freed while still accessible by the VMM and guest. This allows users with the ability to start and control a VM to read/write random pages of memory and can result in local privilege escalation. Arch Linux is an open-source operating system: a lightweight and flexible Linux® distribution that tries to keep it simple.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Important: kernel security update
Advisory ID:       RHSA-2021:3812-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:3812
Issue date:        2021-10-12
CVE Names:         CVE-2021-3653 CVE-2021-3656 CVE-2021-22543 CVE-2021-22555 CVE-2021-37576
====================================================================

1. Summary:
An update for kernel is now available for Red Hat Enterprise Linux 7.6 Advanced Update Support, Red Hat Enterprise Linux 7.6 Telco Extended Update Support, and Red Hat Enterprise Linux 7.6 Update Services for SAP Solutions.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Server AUS (v. 7.6) - noarch, x86_64
Red Hat Enterprise Linux Server E4S (v. 7.6) - noarch, ppc64le, x86_64
Red Hat Enterprise Linux Server Optional AUS (v. 7.6) - x86_64
Red Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64
Red Hat Enterprise Linux Server Optional TUS (v. 7.6) - x86_64
Red Hat Enterprise Linux Server TUS (v. 7.6) - noarch, x86_64
- Description:
The kernel packages contain the Linux kernel, the core of any Linux operating system.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
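For operators following this advisory, the update-and-reboot procedure above can be sketched as the usual RHEL 7 sequence below. This is a hedged illustration, not part of the advisory: it assumes the host is subscribed to the appropriate RHEL 7.6 repositories, and the version string checked is the one shipped in this erratum (3.10.0-957.84.1.el7).

```shell
# Apply the kernel packages from RHSA-2021:3812 (assumes suitable
# RHEL 7.6 AUS/E4S/TUS repositories are already enabled on the host).
yum update kernel

# Confirm the updated kernel package is now installed alongside
# any older kernels (RHEL keeps previous kernel versions).
rpm -q kernel | grep 3.10.0-957.84.1.el7

# Reboot so the machine actually boots into the new kernel;
# the update does not take effect until then.
reboot

# After the reboot, verify the running kernel matches the erratum.
uname -r
```

The `rpm -q kernel` check before rebooting is a common safeguard: it distinguishes "package installed" from "kernel running", which only `uname -r` after the reboot can confirm.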
- Package List:
Red Hat Enterprise Linux Server AUS (v. 7.6):
Source: kernel-3.10.0-957.84.1.el7.src.rpm
noarch:
kernel-abi-whitelists-3.10.0-957.84.1.el7.noarch.rpm
kernel-doc-3.10.0-957.84.1.el7.noarch.rpm

x86_64:
bpftool-3.10.0-957.84.1.el7.x86_64.rpm
kernel-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debug-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debug-devel-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm
kernel-devel-3.10.0-957.84.1.el7.x86_64.rpm
kernel-headers-3.10.0-957.84.1.el7.x86_64.rpm
kernel-tools-3.10.0-957.84.1.el7.x86_64.rpm
kernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
kernel-tools-libs-3.10.0-957.84.1.el7.x86_64.rpm
perf-3.10.0-957.84.1.el7.x86_64.rpm
perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
python-perf-3.10.0-957.84.1.el7.x86_64.rpm
python-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server E4S (v. 7.6):
Source: kernel-3.10.0-957.84.1.el7.src.rpm
noarch:
kernel-abi-whitelists-3.10.0-957.84.1.el7.noarch.rpm
kernel-doc-3.10.0-957.84.1.el7.noarch.rpm

ppc64le:
kernel-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-bootwrapper-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-debug-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-debug-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-debuginfo-common-ppc64le-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-devel-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-headers-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-tools-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-tools-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm
kernel-tools-libs-3.10.0-957.84.1.el7.ppc64le.rpm
perf-3.10.0-957.84.1.el7.ppc64le.rpm
perf-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm
python-perf-3.10.0-957.84.1.el7.ppc64le.rpm
python-perf-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm

x86_64:
kernel-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debug-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debug-devel-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
kernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm
kernel-devel-3.10.0-957.84.1.el7.x86_64.rpm
kernel-headers-3.10.0-957.84.1.el7.x86_64.rpm
kernel-tools-3.10.0-957.84.1.el7.x86_64.rpm
kernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
kernel-tools-libs-3.10.0-957.84.1.el7.x86_64.rpm
perf-3.10.0-957.84.1.el7.x86_64.rpm
perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
python-perf-3.10.0-957.84.1.el7.x86_64.rpm
python-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server TUS (v. 7.6):
Source: kernel-3.10.0-957.84.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-957.84.1.el7.noarch.rpm kernel-doc-3.10.0-957.84.1.el7.noarch.rpm
x86_64: bpftool-3.10.0-957.84.1.el7.x86_64.rpm kernel-3.10.0-957.84.1.el7.x86_64.rpm kernel-debug-3.10.0-957.84.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-957.84.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm kernel-devel-3.10.0-957.84.1.el7.x86_64.rpm kernel-headers-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-957.84.1.el7.x86_64.rpm perf-3.10.0-957.84.1.el7.x86_64.rpm perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm python-perf-3.10.0-957.84.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional AUS (v. 7.6):
x86_64: kernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-957.84.1.el7.x86_64.rpm perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional E4S (v. 7.6):
ppc64le: kernel-debug-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm kernel-debug-devel-3.10.0-957.84.1.el7.ppc64le.rpm kernel-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-957.84.1.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm kernel-tools-libs-devel-3.10.0-957.84.1.el7.ppc64le.rpm perf-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm
x86_64: kernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-957.84.1.el7.x86_64.rpm perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional TUS (v. 7.6):
x86_64: kernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-957.84.1.el7.x86_64.rpm perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm
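For scripting against a package list like the one above, an RPM filename can be split into name, version, release, and architecture with plain shell parameter expansion. A minimal sketch using one filename from the list (this string splitting is an illustration only, not how rpm itself parses NEVRA):

```shell
# Split an RPM filename (taken from the package list above) into its
# name-version-release.arch components using parameter expansion.
f="kernel-tools-libs-3.10.0-957.84.1.el7.x86_64.rpm"

base="${f%.rpm}"     # drop the .rpm suffix
arch="${base##*.}"   # text after the last dot  -> x86_64
nvr="${base%.*}"     # name-version-release
rel="${nvr##*-}"     # text after the last dash -> 957.84.1.el7
nv="${nvr%-*}"       # name-version
ver="${nv##*-}"      # -> 3.10.0
name="${nv%-*}"      # -> kernel-tools-libs

echo "$name $ver $rel $arch"
```

This relies on RPM's convention that version and release never contain dashes, which holds for all filenames in the list above.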
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-3653
https://access.redhat.com/security/cve/CVE-2021-3656
https://access.redhat.com/security/cve/CVE-2021-22543
https://access.redhat.com/security/cve/CVE-2021-22555
https://access.redhat.com/security/cve/CVE-2021-37576
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.3 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
Security fixes:
- nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name (CVE-2021-23017)
- redis: Lua scripts can overflow the heap-based Lua stack (CVE-2021-32626)
- redis: Integer overflow issue with Streams (CVE-2021-32627)
- redis: Integer overflow bug in the ziplist data structure (CVE-2021-32628)
- redis: Integer overflow issue with intsets (CVE-2021-32687)
- redis: Integer overflow issue with strings (CVE-2021-41099)
- redis: Out of bounds read in lua debugger protocol parser (CVE-2021-32672)
- redis: Denial of service via Redis Standard Protocol (RESP) request (CVE-2021-32675)
- helm: information disclosure vulnerability (CVE-2021-32690)
Bug fixes:
- KUBE-API: Support move agent to different cluster in the same namespace (BZ# 1977358)
- Add columns to the Agent CRD list (BZ# 1977398)
- ClusterDeployment controller watches all Secrets from all namespaces (BZ# 1986081)
- RHACM 2.3.3 images (BZ# 1999365)
- Workaround for Network Manager not supporting nmconnections priority (BZ# 2001294)
- create cluster page empty in Safary Browser (BZ# 2002280)
- Compliance state doesn't get updated after fixing the issue causing initially the policy not being able to update the managed object (BZ# 2002667)
- Overview page displays VMware based managed cluster as other (BZ# 2004188)

- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.

Bugs fixed (https://bugzilla.redhat.com/):

1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
1977358 - [4.8.0] KUBE-API: Support move agent to different cluster in the same namespace
1977398 - [4.8.0] [master] Add columns to the Agent CRD list
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1986081 - [4.8.0] ClusterDeployment controller watches all Secrets from all namespaces
1999365 - RHACM 2.3.3 images
2001294 - [4.8.0] Workaround for Network Manager not supporting nmconnections priority
2002280 - create cluster page empty in Safary Browser
2002667 - Compliance state doesn't get updated after fixing the issue causing initially the policy not being able to update the managed object
2004188 - Overview page displays VMware based managed cluster as other
2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets
2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request
2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser
2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure
2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams
2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack
2011020 - CVE-2021-41099 redis: Integer overflow issue with strings
- These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
Bug Fix(es):
- Rebase package(s) to version: 1.2.23
Highlights, important fixes, or notable enhancements:
- imgbase should not copy the selinux binary policy file (BZ# 1979624) (BZ#1989397)
- RHV-H has been rebased on Red Hat Enterprise Linux 8.4 Batch #2. (BZ#1975177)
8) - aarch64, noarch, ppc64le, s390x, x86_64
Bug Fix(es):
- Urgent: Missing dptf_power.ko module in RHEL8 (BZ#1968381)
- [mlx5] kdump over NFS fails: mlx5 driver gives error "Stop room 95 is bigger than the SQ size 64" (BZ#1969909)
- BUG: unable to handle kernel NULL pointer dereference at 0000000000000000 in bluetooth hci_error_reset on intel-tigerlake-h01 (BZ#1972564)
- Update CIFS to kernel 5.10 (BZ#1973637)
- Backport "tick/nohz: Conditionally restart tick on idle exit" to RHEL 8.5 (BZ#1978710)
- Significant performance drop starting on kernel-4.18.0-277 visible on mmap benchmark (BZ#1980314)
- Inaccessible NFS server overloads clients (native_queued_spin_lock_slowpath connotation?) (BZ#1980613)
- [RHEL8.4 BUG],RialtoMLK, I915 graphic driver failed to boot with one new 120HZ panel (BZ#1981250)
- act_ct: subject to DNAT tuple collision (BZ#1982494)

Enhancement(s):

- [Lenovo 8.5 FEAT] drivers/nvme - Update to the latest upstream (BZ#1965415)
7.3) - x86_64

- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.28. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2021:3263
Space precludes documenting all of the container images in this advisory.
Bug Fix(es):
- Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3 (BZ#1956462)
- OCP IPI Publish Internal - GCP: Load Balancer service with External Traffic Policy as Local is not working (BZ#1971669)
- [4.7] Unable to attach Vsphere volume shows the error "failed to get canonical path" (BZ#1973766)
- oc logs doesn't work with piepeline builds (BZ#1974264)
- "provisioned registration errors" cannot be reported (BZ#1976924)
- AWS Elastic IP permissions are incorrectly required (BZ#1981553)
- Memory consumption (container_memory_rss) steadily growing for /system.slice/kubelet.service when FIPS enabled [ocp 4.7] (BZ#1981580)
- Problematic Deployment creates infinite number Replicasets causing etcd to reach quota limit (BZ#1981775)
- Size of the hostname was preventing proper DNS resolution of the worker node names (BZ#1983695)
- (release-4.7) Insights status card shows nothing when 0 issues found (BZ#1986724)
- drop-icmp pod blocks direct SSH access to cluster nodes (BZ#1988426)
- Editing a Deployment drops annotations (BZ#1989642)
- [Kuryr][4.7] Duplicated egress rule for service network in knp object (BZ#1990175)
- Update failed - ovn-nbctl: duplicate nexthop for the same ECMP route (BZ#1991445)
- Unable to install a zVM hosted OCP 4.7.24 on Z Cluster based on new RHCOS 47 RHEL 8.4 based build (BZ#1992240)
- alerts: SystemMemoryExceedsReservation triggers too quickly (BZ#1992687)
- failed to start cri-o service due to /usr/libexec/crio/conmon is missing (BZ#1993386)
- Thanos build failure: vendor/ ignored (BZ#1994123)
- Ipv6 IP addresses are not accepted for whitelisting (BZ#1994645)
- upgrade from 4.6 to 4.7 to 4.8 with mcp worker "paused=true", crio report "panic: close of closed channel" which lead to a master Node go into Restart loop (BZ#1994729)
- linuxptp-daemon crash on 4.8 (BZ#1995579)
- long living clusters may fail to upgrade because of an invalid conmon path (BZ#1995810)
For more details about the security issue(s), refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.28-x86_64
The image digest is sha256:b3f38d58057a12b0477bf28971390db3e3391ce1af8ac06e35d0aa9e8d8e5966
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.28-s390x
The image digest is sha256:30c2011f6d84b16960b981a07558f96a55e59a281449d25c5ccc778aaeb2f970
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.28-ppc64le
The image digest is sha256:52ebf0db5a36434357c24a64863025730d2159a94997333f15fbe1444fa88f4f
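Before pinning an update to one of the digests above, a quick, non-authoritative check of the string's shape can catch copy/paste damage: a well-formed digest is "sha256:" followed by exactly 64 hex characters. A minimal shell sketch using the x86_64 digest from this advisory:

```shell
# Sanity-check that a pinned image digest is well-formed:
# "sha256:" followed by exactly 64 lowercase hex characters.
# Digest taken from the x86_64 release image above.
digest="sha256:b3f38d58057a12b0477bf28971390db3e3391ce1af8ac06e35d0aa9e8d8e5966"

if printf '%s\n' "$digest" | grep -Eq '^sha256:[0-9a-f]{64}$'; then
  ok=yes
else
  ok=no
fi
echo "digest well-formed: $ok"
```

This only validates the format; it does not verify that the digest matches the image content, which the container runtime does when pulling by digest.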
Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
- Solution:
For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1863446 - [Assisted-4.5-M2] clean all does not remove ConfigMaps and PVC 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1956462 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3 1971669 - OCP IPI Publish Internal - GCP: Load Balancer service with External Traffic Policy as Local is not working 1973766 - [4.7] Unable to attach Vsphere volume shows the error "failed to get canonical path" 1974264 - oc logs doesn't work with piepeline builds 1976924 - "provisioned registration errors" cannot be reported 1981553 - AWS Elastic IP permissions are incorrectly required 1981775 - Problematic Deployment creates infinite number Replicasets causing etcd to reach quota limit 1983695 - Size of the hostname was preventing proper DNS resolution of the worker node names 1986724 - (release-4.7) Insights status card shows nothing when 0 issues found 1988426 - drop-icmp pod blocks direct SSH access to cluster nodes 1989642 - Editing a Deployment drops annotations 1990175 - [Kuryr][4.7] Duplicated egress rule for service network in knp object 1991445 - Update failed - ovn-nbctl: duplicate nexthop for the same ECMP route 1992240 - Unable to install a zVM hosted OCP 4.7.24 on Z Cluster based on new RHCOS 47 RHEL 8.4 based build 1992687 - alerts: SystemMemoryExceedsReservation triggers too quickly 1993386 - failed to start cri-o service due to /usr/libexec/crio/conmon is missing 1994123 - Thanos build failure: vendor/ ignored 1994645 - Ipv6 IP addresses are not accepted for whitelisting 1994729 - upgrade from 4.6 to 4.7 to 4.8 with mcp worker "paused=true", crio report "panic: close of closed channel" which lead to a master Node go into Restart loop 1995810 - long living clusters may fail to upgrade because of an invalid conmon path 1998112 - Networking issue with vSphere clusters running HW14 and later
8.1) - ppc64le, x86_64

- Description:
This is a kernel live patch module which is automatically loaded by the RPM post-install script to modify the code of a running kernel.

==========================================================================
Ubuntu Security Notice USN-5094-1
September 29, 2021

linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-dell300x, linux-gcp, linux-gcp-4.15, linux-hwe, linux-kvm, linux-oracle, linux-snapdragon vulnerabilities
==========================================================================
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 18.04 LTS
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in the Linux kernel. It was discovered that the KVM implementation in the Linux kernel improperly handled VM_IO|VM_PFNMAP VMAs, which could bypass read-only checks and leave pages freed while still accessible by the VMM and guest. An attacker who could start and control a VM could possibly use this to expose sensitive information or execute arbitrary code. (CVE-2021-22543)
It was discovered that the tracing subsystem in the Linux kernel did not properly keep track of per-cpu ring buffer state. A privileged attacker could use this to cause a denial of service. (CVE-2021-3679)
Alois Wohlschlager discovered that the overlay file system in the Linux kernel did not restrict private clones in some situations. An attacker could use this to expose sensitive information. (CVE-2021-3732)
Alexey Kardashevskiy discovered that the KVM implementation for PowerPC systems in the Linux kernel did not properly validate RTAS arguments in some situations. An attacker in a guest vm could use this to cause a denial of service (host OS crash) or possibly execute arbitrary code. (CVE-2021-37576)
It was discovered that the MAX-3421 host USB device driver in the Linux kernel did not properly handle device removal events. A physically proximate attacker could use this to cause a denial of service (system crash). (CVE-2021-38204)
It was discovered that the Xilinx 10/100 Ethernet Lite device driver in the Linux kernel could report pointer addresses in some situations. An attacker could use this information to ease the exploitation of another vulnerability. (CVE-2021-38205)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 18.04 LTS: linux-image-4.15.0-1028-dell300x 4.15.0-1028.33 linux-image-4.15.0-1081-oracle 4.15.0-1081.89 linux-image-4.15.0-1100-kvm 4.15.0-1100.102 linux-image-4.15.0-1109-gcp 4.15.0-1109.123 linux-image-4.15.0-1112-aws 4.15.0-1112.119 linux-image-4.15.0-1113-snapdragon 4.15.0-1113.122 linux-image-4.15.0-1124-azure 4.15.0-1124.137 linux-image-4.15.0-159-generic 4.15.0-159.167 linux-image-4.15.0-159-generic-lpae 4.15.0-159.167 linux-image-4.15.0-159-lowlatency 4.15.0-159.167 linux-image-aws-lts-18.04 4.15.0.1112.115 linux-image-azure-lts-18.04 4.15.0.1124.97 linux-image-dell300x 4.15.0.1028.30 linux-image-gcp-lts-18.04 4.15.0.1109.128 linux-image-generic 4.15.0.159.148 linux-image-generic-lpae 4.15.0.159.148 linux-image-kvm 4.15.0.1100.96 linux-image-lowlatency 4.15.0.159.148 linux-image-oracle-lts-18.04 4.15.0.1081.91 linux-image-snapdragon 4.15.0.1113.116 linux-image-virtual 4.15.0.159.148
Ubuntu 16.04 ESM: linux-image-4.15.0-1081-oracle 4.15.0-1081.89~16.04.1 linux-image-4.15.0-1109-gcp 4.15.0-1109.123~16.04.1 linux-image-4.15.0-1112-aws 4.15.0-1112.119~16.04.1 linux-image-4.15.0-1124-azure 4.15.0-1124.137~16.04.1 linux-image-4.15.0-159-generic 4.15.0-159.167~16.04.1 linux-image-4.15.0-159-lowlatency 4.15.0-159.167~16.04.1 linux-image-aws-hwe 4.15.0.1112.103 linux-image-azure 4.15.0.1124.115 linux-image-gcp 4.15.0.1109.110 linux-image-generic-hwe-16.04 4.15.0.159.152 linux-image-gke 4.15.0.1109.110 linux-image-lowlatency-hwe-16.04 4.15.0.159.152 linux-image-oem 4.15.0.159.152 linux-image-oracle 4.15.0.1081.69 linux-image-virtual-hwe-16.04 4.15.0.159.152
Ubuntu 14.04 ESM: linux-image-4.15.0-1124-azure 4.15.0-1124.137~14.04.1 linux-image-azure 4.15.0.1124.97
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
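Whether an installed kernel package is below the fixed version in the table above can be checked with GNU `sort -V` (version sort). A minimal sketch; the `installed` value here is a hypothetical example, not taken from the advisory:

```shell
# Compare an installed kernel package version against the fixed version
# from the Ubuntu 18.04 LTS table above, using GNU sort -V.
installed="4.15.0-158.166"   # hypothetical currently-installed version
fixed="4.15.0-159.167"       # fixed version from the table above

# sort -V orders version strings; the first line is the lowest version.
lowest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)

if [ "$lowest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
  echo "update required"
else
  echo "up to date"
fi
```

On a real system the installed version would come from `dpkg -l` or `uname -r`; the comparison logic above is the only part sketched here.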
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202105-1451",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "kernel",
"scope": "eq",
"trust": 1.0,
"vendor": "linux",
"version": "2021-05-18"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:2021-05-18:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:netapp:solidfire_baseboard_management_controller_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "164478"
},
{
"db": "PACKETSTORM",
"id": "164562"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163768"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163972"
},
{
"db": "PACKETSTORM",
"id": "164028"
},
{
"db": "PACKETSTORM",
"id": "163861"
}
],
"trust": 0.8
},
"cve": "CVE-2021-22543",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 4.6,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.9,
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:L/AC:L/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 4.6,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.9,
"id": "VHN-380980",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:L/AC:L/AU:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULMON",
"availabilityImpact": "PARTIAL",
"baseScore": 4.6,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.9,
"id": "CVE-2021-22543",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "MEDIUM",
"trust": 0.1,
"userInteractionRequired": null,
"vectorString": "AV:L/AC:L/Au:N/C:P/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-22543",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-380980",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2021-22543",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "An issue was discovered in Linux: KVM through Improper handling of VM_IO|VM_PFNMAP vmas in KVM can bypass RO checks and can lead to pages being freed while still accessible by the VMM and guest. This allows users with the ability to start and control a VM to read/write random pages of memory and can result in local privilege escalation. Arch Linux is an application system of Arch open source. A lightweight and flexible Linux\u00ae distribution that tries to keep it simple. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: kernel security update\nAdvisory ID: RHSA-2021:3812-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:3812\nIssue date: 2021-10-12\nCVE Names: CVE-2021-3653 CVE-2021-3656 CVE-2021-22543\n CVE-2021-22555 CVE-2021-37576\n====================================================================\n1. Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 7.6\nAdvanced Update Support, Red Hat Enterprise Linux 7.6 Telco Extended Update\nSupport, and Red Hat Enterprise Linux 7.6 Update Services for SAP\nSolutions. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Server AUS (v. 7.6) - noarch, x86_64\nRed Hat Enterprise Linux Server E4S (v. 7.6) - noarch, ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional AUS (v. 7.6) - x86_64\nRed Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional TUS (v. 7.6) - x86_64\nRed Hat Enterprise Linux Server TUS (v. 7.6) - noarch, x86_64\n\n3. 
Description:\n\nThe kernel packages contain the Linux kernel, the core of any Linux\noperating system. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Package List:\n\nRed Hat Enterprise Linux Server AUS (v. 7.6):\n\nSource:\nkernel-3.10.0-957.84.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-957.84.1.el7.noarch.rpm\nkernel-doc-3.10.0-957.84.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-devel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-headers-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-957.84.1.el7.x86_64.rpm\nperf-3.10.0-957.84.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server E4S (v. 
7.6):\n\nSource:\nkernel-3.10.0-957.84.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-957.84.1.el7.noarch.rpm\nkernel-doc-3.10.0-957.84.1.el7.noarch.rpm\n\nppc64le:\nkernel-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-bootwrapper-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-debug-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-devel-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-headers-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-tools-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-tools-libs-3.10.0-957.84.1.el7.ppc64le.rpm\nperf-3.10.0-957.84.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\npython-perf-3.10.0-957.84.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\n\nx86_64:\nkernel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-devel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-headers-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-957.84.1.el7.x86_64.rpm\nperf-3.10.0-957.84.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server TUS (v. 
7.6):\n\nSource:\nkernel-3.10.0-957.84.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-957.84.1.el7.noarch.rpm\nkernel-doc-3.10.0-957.84.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-devel-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-headers-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-957.84.1.el7.x86_64.rpm\nperf-3.10.0-957.84.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional AUS (v. 7.6):\n\nx86_64:\nkernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-957.84.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional E4S (v. 
7.6):\n\nppc64le:\nkernel-debug-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-debug-devel-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\nkernel-tools-libs-devel-3.10.0-957.84.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-957.84.1.el7.ppc64le.rpm\n\nx86_64:\nkernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-957.84.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional TUS (v. 7.6):\n\nx86_64:\nkernel-debug-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-957.84.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-957.84.1.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-3653\nhttps://access.redhat.com/security/cve/CVE-2021-3656\nhttps://access.redhat.com/security/cve/CVE-2021-22543\nhttps://access.redhat.com/security/cve/CVE-2021-22555\nhttps://access.redhat.com/security/cve/CVE-2021-37576\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYWWkrNzjgjWX9erEAQgeVg//eBUd0HQYAc3u6Nc1nBOOBM+SWHAI6rwT\nSWbLidMennvaCylOWpdL23fkpPkhCRyj5fbXen2dQGtUop2sEm6nmVLFW6cQO8Yv\nrfXN3wnrW8P0Ah3QtWeV1O22ah8WzcgeML96xyUl3K7yJSY+evXYYBUM7Q8nbzW1\n0K7VJm79SxRsszcNq3R/mG5MuPJV+cSnSlG+1cqK+A+YRVKDIE/7vWiTS1z9GDlY\n/DIlKzvEcsyaOFnKYg+RQRQ5HCB5Hp2JNjRrz5F+X+KJRoT6KTDlbFY/a34FjYWA\nfUN3pA0aQtdGgQsAB6+T+pVGTYq0vRWPOXhnN2OaN7a95OwbVC9DmiQXwjLQnn2C\nrvU52ExQP6Jf+WfNHo7yfFMgR1Eyihkc+/zf0qPQomuEqMjEfqyMXOoPsCoXotGs\nOBJEVIYvBmA7CYif9L0cQRNJyTFfv1IcXKgP2UgrGdChQnLBz/ijVUU17HcU783R\nG6hXh2+Ojllbt5/rf2PQfFZYjswIVuGe9/You+T6qQXJ+zInz1GOi92+Vwv+gEGF\nKNLqZJ1nvB1xOm60noCBhnke5vPbzFA+YNGSJnc0tLxZ6Dek891Qb1Eq1f+rwR/9\nwfCEgECYLoGvlc2RXC12m13BLUTL+VjVyc0KmG/O9vjFD42EzeFo8rpuXjVsF7QJ\nS627544rR1c=4NQZ\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.3 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with\nsecurity policy built in. 
\n\nSecurity fixes: \n\n* nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a\npointer to a root domain name (CVE-2021-23017)\n\n* redis: Lua scripts can overflow the heap-based Lua stack (CVE-2021-32626)\n\n* redis: Integer overflow issue with Streams (CVE-2021-32627)\n\n* redis: Integer overflow bug in the ziplist data structure\n(CVE-2021-32628)\n\n* redis: Integer overflow issue with intsets (CVE-2021-32687)\n\n* redis: Integer overflow issue with strings (CVE-2021-41099)\n\n* redis: Out of bounds read in lua debugger protocol parser\n(CVE-2021-32672)\n\n* redis: Denial of service via Redis Standard Protocol (RESP) request\n(CVE-2021-32675)\n\n* helm: information disclosure vulnerability (CVE-2021-32690)\n\nBug fixes:\n\n* KUBE-API: Support move agent to different cluster in the same namespace\n(BZ# 1977358)\n\n* Add columns to the Agent CRD list (BZ# 1977398)\n\n* ClusterDeployment controller watches all Secrets from all namespaces (BZ#\n1986081)\n\n* RHACM 2.3.3 images (BZ# 1999365)\n\n* Workaround for Network Manager not supporting nmconnections priority (BZ#\n2001294)\n\n* create cluster page empty in Safary Browser (BZ# 2002280)\n\n* Compliance state doesn\u0027t get updated after fixing the issue causing\ninitially the policy not being able to update the managed object (BZ#\n2002667)\n\n* Overview page displays VMware based managed cluster as other (BZ#\n2004188)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name\n1977358 - [4.8.0] KUBE-API: Support move agent to different cluster in the same namespace\n1977398 - [4.8.0] [master] Add columns to the Agent CRD list\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1986081 - [4.8.0] ClusterDeployment controller watches all Secrets from all namespaces\n1999365 - RHACM 2.3.3 images\n2001294 - [4.8.0] Workaround for Network Manager not supporting nmconnections priority\n2002280 - create cluster page empty in Safary Browser\n2002667 - Compliance state doesn\u0027t get updated after fixing the issue causing initially the policy not being able to update the managed object\n2004188 - Overview page displays VMware based managed cluster as other\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n\n5. \nThese packages include redhat-release-virtualization-host, ovirt-node, and\nrhev-hypervisor. RHVH features a Cockpit user interface for\nmonitoring the host\u0027s resources and performing administrative tasks. \n\nBug Fix(es):\n\n* Rebase package(s) to version: 1.2.23\n\nHighlights, important fixes, or notable enhancements: \n\n* imgbase should not copy the selinux binary policy file (BZ# 1979624)\n(BZ#1989397)\n\n* RHV-H has been rebased on Red Hat Enterprise Linux 8.4 Batch #2. \n(BZ#1975177)\n\n4. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. 
\n\nBug Fix(es):\n\n* Urgent: Missing dptf_power.ko module in RHEL8 (BZ#1968381)\n\n* [mlx5] kdump over NFS fails: mlx5 driver gives error \"Stop room 95 is\nbigger than the SQ size 64\" (BZ#1969909)\n\n* BUG: unable to handle kernel NULL pointer dereference at 0000000000000000\nin bluetooth hci_error_reset on intel-tigerlake-h01 (BZ#1972564)\n\n* Update CIFS to kernel 5.10 (BZ#1973637)\n\n* Backport \"tick/nohz: Conditionally restart tick on idle exit\" to RHEL 8.5\n(BZ#1978710)\n\n* Significant performance drop starting on kernel-4.18.0-277 visible on\nmmap benchmark (BZ#1980314)\n\n* Inaccessible NFS server overloads clients\n(native_queued_spin_lock_slowpath connotation?) (BZ#1980613)\n\n* [RHEL8.4 BUG],RialtoMLK, I915 graphic driver failed to boot with one new\n120HZ panel (BZ#1981250)\n\n* act_ct: subject to DNAT tuple collision (BZ#1982494)\n\nEnhancement(s):\n\n* [Lenovo 8.5 FEAT] drivers/nvme - Update to the latest upstream\n(BZ#1965415)\n\n4. 7.3) - x86_64\n\n3. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.28. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2021:3263\n\nSpace precludes documenting all of the container images in this advisory. 
\n\nBug Fix(es):\n\n* Machine Config Operator degrades during cluster update with failed to\nconvert Ignition config spec v2 to v3 (BZ#1956462)\n\n* OCP IPI Publish Internal - GCP: Load Balancer service with External\nTraffic Policy as Local is not working (BZ#1971669)\n\n* [4.7] Unable to attach Vsphere volume shows the error \"failed to get\ncanonical path\" (BZ#1973766)\n\n* oc logs doesn\u0027t work with piepeline builds (BZ#1974264)\n\n* \"provisioned registration errors\" cannot be reported (BZ#1976924)\n\n* AWS Elastic IP permissions are incorrectly required (BZ#1981553)\n\n* Memory consumption (container_memory_rss) steadily growing for\n/system.slice/kubelet.service when FIPS enabled [ocp 4.7] (BZ#1981580)\n\n* Problematic Deployment creates infinite number Replicasets causing etcd\nto reach quota limit (BZ#1981775)\n\n* Size of the hostname was preventing proper DNS resolution of the worker\nnode names (BZ#1983695)\n\n* (release-4.7) Insights status card shows nothing when 0 issues found\n(BZ#1986724)\n\n* drop-icmp pod blocks direct SSH access to cluster nodes (BZ#1988426)\n\n* Editing a Deployment drops annotations (BZ#1989642)\n\n* [Kuryr][4.7] Duplicated egress rule for service network in knp object\n(BZ#1990175)\n\n* Update failed - ovn-nbctl: duplicate nexthop for the same ECMP route\n(BZ#1991445)\n\n* Unable to install a zVM hosted OCP 4.7.24 on Z Cluster based on new RHCOS\n47 RHEL 8.4 based build (BZ#1992240)\n\n* alerts: SystemMemoryExceedsReservation triggers too quickly (BZ#1992687)\n\n* failed to start cri-o service due to /usr/libexec/crio/conmon is missing\n(BZ#1993386)\n\n* Thanos build failure: vendor/ ignored (BZ#1994123)\n\n* Ipv6 IP addresses are not accepted for whitelisting (BZ#1994645)\n\n* upgrade from 4.6 to 4.7 to 4.8 with mcp worker \"paused=true\", crio\nreport \"panic: close of closed channel\" which lead to a master Node go into\nRestart loop (BZ#1994729)\n\n* linuxptp-daemon crash on 4.8 (BZ#1995579)\n\n* long living 
clusters may fail to upgrade because of an invalid conmon\npath (BZ#1995810)\n\nFor more details about the security issue(s), refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.28-x86_64\n\nThe image digest is\nsha256:b3f38d58057a12b0477bf28971390db3e3391ce1af8ac06e35d0aa9e8d8e5966\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.28-s390x\n\nThe image digest is\nsha256:30c2011f6d84b16960b981a07558f96a55e59a281449d25c5ccc778aaeb2f970\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.28-ppc64le\n\nThe image digest is\nsha256:52ebf0db5a36434357c24a64863025730d2159a94997333f15fbe1444fa88f4f\n\nInstructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1863446 - [Assisted-4.5-M2] clean all does not remove ConfigMaps and PVC\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1956462 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3\n1971669 - OCP IPI Publish Internal - GCP: Load Balancer service with External Traffic Policy as Local is not working\n1973766 - [4.7] Unable to attach Vsphere volume shows the error \"failed to get canonical path\"\n1974264 - oc logs doesn\u0027t work with piepeline builds\n1976924 - \"provisioned registration errors\" cannot be reported\n1981553 - AWS Elastic IP permissions are incorrectly required\n1981775 - Problematic Deployment creates infinite number Replicasets causing etcd to reach quota limit\n1983695 - Size of the hostname was preventing proper DNS resolution of the worker node names\n1986724 - (release-4.7) Insights status card shows nothing when 0 issues found\n1988426 - drop-icmp pod blocks direct SSH access to cluster nodes\n1989642 - Editing a Deployment drops annotations\n1990175 - [Kuryr][4.7] Duplicated egress rule for service network in knp object\n1991445 - Update failed - ovn-nbctl: duplicate nexthop for the same ECMP route\n1992240 - Unable to install a zVM hosted OCP 4.7.24 on Z Cluster based on new RHCOS 47 RHEL 8.4 based build\n1992687 - alerts: SystemMemoryExceedsReservation triggers too quickly\n1993386 - failed to start cri-o service due to /usr/libexec/crio/conmon is missing\n1994123 - Thanos build failure: vendor/ ignored\n1994645 - Ipv6 IP addresses are not accepted for whitelisting\n1994729 - upgrade from 4.6 to 4.7 to 4.8 with mcp worker \"paused=true\", crio report \"panic: close of closed channel\" which lead to a master Node go into Restart loop\n1995810 - long living clusters may fail to upgrade because of an invalid conmon path\n1998112 - Networking issue with vSphere clusters running HW14 and 
later\n\n5. 8.1) - ppc64le, x86_64\n\n3. Description:\n\nThis is a kernel live patch module which is automatically loaded by the RPM\npost-install script to modify the code of a running kernel. ==========================================================================\nUbuntu Security Notice USN-5094-1\nSeptember 29, 2021\n\nlinux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15,\nlinux-dell300x, linux-gcp, linux-gcp-4.15, linux-hwe, linux-kvm,\nlinux-oracle, linux-snapdragon vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. An attacker who could start and\ncontrol a VM could possibly use this to expose sensitive information or\nexecute arbitrary code. (CVE-2021-22543)\n\nIt was discovered that the tracing subsystem in the Linux kernel did not\nproperly keep track of per-cpu ring buffer state. A privileged attacker\ncould use this to cause a denial of service. (CVE-2021-3679)\n\nAlois Wohlschlager discovered that the overlay file system in the Linux\nkernel did not restrict private clones in some situations. An attacker\ncould use this to expose sensitive information. (CVE-2021-3732)\n\nAlexey Kardashevskiy discovered that the KVM implementation for PowerPC\nsystems in the Linux kernel did not properly validate RTAS arguments in\nsome situations. An attacker in a guest vm could use this to cause a denial\nof service (host OS crash) or possibly execute arbitrary code. \n(CVE-2021-37576)\n\nIt was discovered that the MAX-3421 host USB device driver in the Linux\nkernel did not properly handle device removal events. A physically\nproximate attacker could use this to cause a denial of service (system\ncrash). 
(CVE-2021-38204)\n\nIt was discovered that the Xilinx 10/100 Ethernet Lite device driver in the\nLinux kernel could report pointer addresses in some situations. An attacker\ncould use this information to ease the exploitation of another\nvulnerability. (CVE-2021-38205)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 18.04 LTS:\n linux-image-4.15.0-1028-dell300x 4.15.0-1028.33\n linux-image-4.15.0-1081-oracle 4.15.0-1081.89\n linux-image-4.15.0-1100-kvm 4.15.0-1100.102\n linux-image-4.15.0-1109-gcp 4.15.0-1109.123\n linux-image-4.15.0-1112-aws 4.15.0-1112.119\n linux-image-4.15.0-1113-snapdragon 4.15.0-1113.122\n linux-image-4.15.0-1124-azure 4.15.0-1124.137\n linux-image-4.15.0-159-generic 4.15.0-159.167\n linux-image-4.15.0-159-generic-lpae 4.15.0-159.167\n linux-image-4.15.0-159-lowlatency 4.15.0-159.167\n linux-image-aws-lts-18.04 4.15.0.1112.115\n linux-image-azure-lts-18.04 4.15.0.1124.97\n linux-image-dell300x 4.15.0.1028.30\n linux-image-gcp-lts-18.04 4.15.0.1109.128\n linux-image-generic 4.15.0.159.148\n linux-image-generic-lpae 4.15.0.159.148\n linux-image-kvm 4.15.0.1100.96\n linux-image-lowlatency 4.15.0.159.148\n linux-image-oracle-lts-18.04 4.15.0.1081.91\n linux-image-snapdragon 4.15.0.1113.116\n linux-image-virtual 4.15.0.159.148\n\nUbuntu 16.04 ESM:\n linux-image-4.15.0-1081-oracle 4.15.0-1081.89~16.04.1\n linux-image-4.15.0-1109-gcp 4.15.0-1109.123~16.04.1\n linux-image-4.15.0-1112-aws 4.15.0-1112.119~16.04.1\n linux-image-4.15.0-1124-azure 4.15.0-1124.137~16.04.1\n linux-image-4.15.0-159-generic 4.15.0-159.167~16.04.1\n linux-image-4.15.0-159-lowlatency 4.15.0-159.167~16.04.1\n linux-image-aws-hwe 4.15.0.1112.103\n linux-image-azure 4.15.0.1124.115\n linux-image-gcp 4.15.0.1109.110\n linux-image-generic-hwe-16.04 4.15.0.159.152\n linux-image-gke 4.15.0.1109.110\n linux-image-lowlatency-hwe-16.04 4.15.0.159.152\n linux-image-oem 4.15.0.159.152\n linux-image-oracle 
4.15.0.1081.69\n linux-image-virtual-hwe-16.04 4.15.0.159.152\n\nUbuntu 14.04 ESM:\n linux-image-4.15.0-1124-azure 4.15.0-1124.137~14.04.1\n linux-image-azure 4.15.0.1124.97\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22543"
},
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "PACKETSTORM",
"id": "164478"
},
{
"db": "PACKETSTORM",
"id": "164562"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163768"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163972"
},
{
"db": "PACKETSTORM",
"id": "164028"
},
{
"db": "PACKETSTORM",
"id": "163861"
},
{
"db": "PACKETSTORM",
"id": "164331"
},
{
"db": "PACKETSTORM",
"id": "164237"
}
],
"trust": 1.98
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-22543",
"trust": 2.2
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/06/26/1",
"trust": 1.2
},
{
"db": "PACKETSTORM",
"id": "164589",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164666",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164652",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167858",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164583",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-380980",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-22543",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164478",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164562",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163926",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163768",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164469",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163972",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164028",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163861",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164331",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164237",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "PACKETSTORM",
"id": "164478"
},
{
"db": "PACKETSTORM",
"id": "164562"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163768"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163972"
},
{
"db": "PACKETSTORM",
"id": "164028"
},
{
"db": "PACKETSTORM",
"id": "163861"
},
{
"db": "PACKETSTORM",
"id": "164331"
},
{
"db": "PACKETSTORM",
"id": "164237"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"id": "VAR-202105-1451",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T20:21:39.668000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Red Hat: Important: kernel security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225640 - security advisory"
},
{
"title": "Red Hat: CVE-2021-22543",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-22543"
},
{
"title": "Amazon Linux 2: ALAS2-2021-1699",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1699"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-22543 log"
},
{
"title": "Amazon Linux AMI: ALAS-2021-1539",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2021-1539"
},
{
"title": "Amazon Linux 2: ALAS2KERNEL-5.4-2022-004",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2kernel-5.4-2022-004"
},
{
"title": "Amazon Linux 2: ALAS2KERNEL-5.10-2022-002",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2kernel-5.10-2022-002"
},
{
"title": "CVE-2021-22543",
"trust": 0.1,
"url": "https://github.com/jamesgeeee/cve-2021-22543 "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-22543"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-119",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.2,
"url": "https://security.netapp.com/advisory/ntap-20210708-0002/"
},
{
"trust": 1.2,
"url": "https://github.com/google/security-research/security/advisories/ghsa-7wq5-phmq-m584"
},
{
"trust": 1.2,
"url": "https://lists.debian.org/debian-lts-announce/2021/10/msg00010.html"
},
{
"trust": 1.2,
"url": "https://lists.debian.org/debian-lts-announce/2021/12/msg00012.html"
},
{
"trust": 1.2,
"url": "http://www.openwall.com/lists/oss-security/2021/06/26/1"
},
{
"trust": 1.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22543"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/4g5ybuvephzyxmkngbz3s6infcteel4e/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/roqixqb7zawi3ksgshr6h5rduwzi775s/"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2021-22543"
},
{
"trust": 0.8,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.8,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22555"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-22555"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-37576"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3609"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3609"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-37576"
},
{
"trust": 0.2,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/roqixqb7zawi3ksgshr6h5rduwzi775s/"
},
{
"trust": 0.2,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4g5ybuvephzyxmkngbz3s6infcteel4e/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3653"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3656"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3653"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32399"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-32399"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/119.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5640"
},
{
"trust": 0.1,
"url": "https://github.com/jamesgeeee/cve-2021-22543"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21670"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32626"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32687"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-21670"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41099"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25741"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-37750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25648"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-21671"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32675"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41099"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21671"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23017"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25741"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32627"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32672"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3621"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3621"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3057"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3380"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3262"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2021:3263"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3181"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/4.15.0-1100.102"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38204"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-dell300x/4.15.0-1028.33"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-4.15/4.15.0-1109.123"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1081.89"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3732"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.15.0-1113.122"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-4.15/4.15.0-1124.137"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5094-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.15.0-1112.119"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3679"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.15.0-159.167"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5071-1"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5071-3"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi-5.4/5.4.0-1043.47~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.4.0-1043.47"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "PACKETSTORM",
"id": "164478"
},
{
"db": "PACKETSTORM",
"id": "164562"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163768"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163972"
},
{
"db": "PACKETSTORM",
"id": "164028"
},
{
"db": "PACKETSTORM",
"id": "163861"
},
{
"db": "PACKETSTORM",
"id": "164331"
},
{
"db": "PACKETSTORM",
"id": "164237"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "PACKETSTORM",
"id": "164478"
},
{
"db": "PACKETSTORM",
"id": "164562"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163768"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163972"
},
{
"db": "PACKETSTORM",
"id": "164028"
},
{
"db": "PACKETSTORM",
"id": "163861"
},
{
"db": "PACKETSTORM",
"id": "164331"
},
{
"db": "PACKETSTORM",
"id": "164237"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-05-26T00:00:00",
"db": "VULHUB",
"id": "VHN-380980"
},
{
"date": "2021-05-26T00:00:00",
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"date": "2021-10-12T15:34:44",
"db": "PACKETSTORM",
"id": "164478"
},
{
"date": "2021-10-20T15:45:47",
"db": "PACKETSTORM",
"id": "164562"
},
{
"date": "2021-08-28T13:22:22",
"db": "PACKETSTORM",
"id": "163926"
},
{
"date": "2021-08-10T14:48:02",
"db": "PACKETSTORM",
"id": "163768"
},
{
"date": "2021-10-12T15:33:21",
"db": "PACKETSTORM",
"id": "164469"
},
{
"date": "2021-08-31T15:58:41",
"db": "PACKETSTORM",
"id": "163972"
},
{
"date": "2021-09-02T15:23:31",
"db": "PACKETSTORM",
"id": "164028"
},
{
"date": "2021-08-17T15:14:36",
"db": "PACKETSTORM",
"id": "163861"
},
{
"date": "2021-09-29T14:51:35",
"db": "PACKETSTORM",
"id": "164331"
},
{
"date": "2021-09-22T16:24:38",
"db": "PACKETSTORM",
"id": "164237"
},
{
"date": "2021-05-26T11:15:08.623000",
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-04-01T00:00:00",
"db": "VULHUB",
"id": "VHN-380980"
},
{
"date": "2022-04-01T00:00:00",
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"date": "2024-05-29T20:15:09.870000",
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "164237"
}
],
"trust": 0.1
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat Security Advisory 2021-3812-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "164478"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "arbitrary",
"sources": [
{
"db": "PACKETSTORM",
"id": "164331"
},
{
"db": "PACKETSTORM",
"id": "164237"
}
],
"trust": 0.2
}
}
VAR-201909-1526
Vulnerability from variot - Updated: 2024-07-23 20:14

There is a heap-based buffer overflow in the kernel, in all versions up to but excluding 5.3, in the Marvell Wi-Fi chip driver in the Linux kernel, which allows local users to cause a denial of service (system crash) or possibly execute arbitrary code. The Linux kernel contains a classic buffer overflow vulnerability. An attacker may obtain or alter information, or disrupt service operation (DoS).

=========================================================================
Ubuntu Security Notice USN-4162-2
October 23, 2019
linux-azure vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in the Linux kernel. This update provides the corresponding updates for the Linux kernel for Microsoft Azure Cloud systems for Ubuntu 14.04 ESM.
It was discovered that the RSI 91x Wi-Fi driver in the Linux kernel did not handle detach operations correctly, leading to a use-after-free vulnerability. (CVE-2019-14814, CVE-2019-14815, CVE-2019-14816)
Matt Delco discovered that the KVM hypervisor implementation in the Linux kernel did not properly perform bounds checking when handling coalesced MMIO write operations. A local attacker with write access to /dev/kvm could use this to cause a denial of service (system crash). (CVE-2019-14821)
Hui Peng and Mathias Payer discovered that the USB audio driver for the Linux kernel did not properly validate device meta data. A physically proximate attacker could use this to cause a denial of service (system crash). (CVE-2019-15117)
Hui Peng and Mathias Payer discovered that the USB audio driver for the Linux kernel improperly performed recursion while handling device meta data. A physically proximate attacker could use this to cause a denial of service (system crash). (CVE-2019-15118)
It was discovered that the Technisat DVB-S/S2 USB device driver in the Linux kernel contained a buffer overread. A physically proximate attacker could use this to cause a denial of service (system crash) or possibly expose sensitive information. (CVE-2019-15505)
Brad Spengler discovered that a Spectre mitigation was improperly implemented in the ptrace susbsystem of the Linux kernel. A local attacker could possibly use this to expose sensitive information. (CVE-2019-15902)
It was discovered that the SMB networking file system implementation in the Linux kernel contained a buffer overread. An attacker could use this to expose sensitive information (kernel memory). (CVE-2019-15918)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 14.04 ESM: linux-image-4.15.0-1061-azure 4.15.0-1061.66~14.04.1 linux-image-azure 4.15.0.1061.47
After a standard system update you need to reboot your computer to make all the necessary changes.
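The package-version requirement above can be sketched as a small shell check; the fixed version string (4.15.0-1061.66) is taken from this advisory, while the `version_ge` helper name is an illustrative assumption, not part of the notice:

```shell
# version_ge A B: succeeds when version A is >= version B (uses GNU sort -V).
# Illustrative helper, not part of the advisory.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the running kernel against the fixed build from this advisory.
if version_ge "$(uname -r)" "4.15.0-1061.66"; then
  echo "kernel is at or above the fixed version"
else
  echo "kernel predates the fix; update and reboot"
fi
```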
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: kernel security and bug fix update Advisory ID: RHSA-2020:0374-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2020:0374 Issue date: 2020-02-04 CVE Names: CVE-2019-14816 CVE-2019-14895 CVE-2019-14898 CVE-2019-14901 CVE-2019-17133 =====================================================================
- Summary:
An update for kernel is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The kernel packages contain the Linux kernel, the core of any Linux operating system.
Security Fix(es):
-
kernel: heap overflow in mwifiex_update_vs_ie() function of Marvell WiFi driver (CVE-2019-14816)
-
kernel: heap-based buffer overflow in mwifiex_process_country_ie() function in drivers/net/wireless/marvell/mwifiex/sta_ioctl.c (CVE-2019-14895)
-
kernel: heap overflow in marvell/mwifiex/tdls.c (CVE-2019-14901)
-
kernel: buffer overflow in cfg80211_mgd_wext_giwessid in net/wireless/wext-sme.c (CVE-2019-17133)
-
kernel: incomplete fix for race condition between mmget_not_zero()/get_task_mm() and core dumping in CVE-2019-11599 (CVE-2019-14898)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
-
[Azure][7.8] Include patch "PCI: hv: Avoid use of hv_pci_dev->pci_slot after freeing it" (BZ#1766089)
-
[Hyper-V][RHEL7.8] When accelerated networking is enabled on RedHat, network interface(eth0) moved to new network namespace does not obtain IP address. (BZ#1766093)
-
[Azure][RHEL 7.6] hv_vmbus probe pass-through GPU card failed (BZ#1766097)
-
SMB3: Do not error out on large file transfers if server responds with STATUS_INSUFFICIENT_RESOURCES (BZ#1767621)
-
Since RHEL commit 5330f5d09820 high load can cause dm-multipath path failures (BZ#1770113)
-
Hard lockup in free_one_page()->_raw_spin_lock() because sosreport command is reading from /proc/pagetypeinfo (BZ#1770732)
-
patchset for x86/atomic: Fix smp_mb__{before,after}_atomic() (BZ#1772812)
-
fix compat statfs64() returning EOVERFLOW for when _FILE_OFFSET_BITS=64 (BZ#1775678)
-
Guest crash after load cpuidle-haltpoll driver (BZ#1776289)
-
RHEL 7.7 long I/O stalls with bnx2fc from not masking off scope bits of retry delay value (BZ#1776290)
-
Multiple "mv" processes hung on a gfs2 filesystem (BZ#1777297)
-
Moving Egress IP will result in conntrack sessions being DESTROYED (BZ#1779564)
-
core: backports from upstream (BZ#1780033)
-
kernel BUG at arch/powerpc/platforms/pseries/lpar.c:482! (BZ#1780148)
-
Race between tty_open() and flush_to_ldisc() using the tty_struct->driver_data field. (BZ#1780163)
-
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
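After the reboot, the result can be verified by matching the running kernel against the erratum build. A minimal sketch, assuming RHEL 7 and the package version listed in this advisory; `check_kernel` is a hypothetical helper name used only for illustration:

```shell
# check_kernel RELEASE: reports whether a given `uname -r` string is the
# fixed build (3.10.0-1062.12.1.el7) from this advisory. Hypothetical helper.
fixed="3.10.0-1062.12.1.el7"
check_kernel() {
  case "$1" in
    "$fixed"*) echo "patched" ;;
    *)         echo "vulnerable" ;;
  esac
}

check_kernel "$(uname -r)"
```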
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: kernel-3.10.0-1062.12.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm kernel-doc-3.10.0-1062.12.1.el7.noarch.rpm
x86_64: bpftool-3.10.0-1062.12.1.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm perf-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: kernel-3.10.0-1062.12.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm kernel-doc-3.10.0-1062.12.1.el7.noarch.rpm
x86_64: bpftool-3.10.0-1062.12.1.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm perf-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: kernel-3.10.0-1062.12.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm kernel-doc-3.10.0-1062.12.1.el7.noarch.rpm
ppc64: bpftool-3.10.0-1062.12.1.el7.ppc64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-3.10.0-1062.12.1.el7.ppc64.rpm kernel-bootwrapper-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debug-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-1062.12.1.el7.ppc64.rpm kernel-devel-3.10.0-1062.12.1.el7.ppc64.rpm kernel-headers-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.ppc64.rpm perf-3.10.0-1062.12.1.el7.ppc64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm python-perf-3.10.0-1062.12.1.el7.ppc64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm
ppc64le: bpftool-3.10.0-1062.12.1.el7.ppc64le.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-bootwrapper-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debug-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-devel-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-headers-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.ppc64le.rpm perf-3.10.0-1062.12.1.el7.ppc64le.rpm perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm python-perf-3.10.0-1062.12.1.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm
s390x: bpftool-3.10.0-1062.12.1.el7.s390x.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm kernel-3.10.0-1062.12.1.el7.s390x.rpm kernel-debug-3.10.0-1062.12.1.el7.s390x.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.s390x.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm kernel-debuginfo-common-s390x-3.10.0-1062.12.1.el7.s390x.rpm kernel-devel-3.10.0-1062.12.1.el7.s390x.rpm kernel-headers-3.10.0-1062.12.1.el7.s390x.rpm kernel-kdump-3.10.0-1062.12.1.el7.s390x.rpm kernel-kdump-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm kernel-kdump-devel-3.10.0-1062.12.1.el7.s390x.rpm perf-3.10.0-1062.12.1.el7.s390x.rpm perf-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm python-perf-3.10.0-1062.12.1.el7.s390x.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm
x86_64: bpftool-3.10.0-1062.12.1.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm perf-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: bpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.ppc64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm
ppc64le: bpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.ppc64le.rpm perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm
x86_64: bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: kernel-3.10.0-1062.12.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm kernel-doc-3.10.0-1062.12.1.el7.noarch.rpm
x86_64: bpftool-3.10.0-1062.12.1.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm perf-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2019-14816 https://access.redhat.com/security/cve/CVE-2019-14895 https://access.redhat.com/security/cve/CVE-2019-14898 https://access.redhat.com/security/cve/CVE-2019-14901 https://access.redhat.com/security/cve/CVE-2019-17133 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2020 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://www.redhat.com/mailman/listinfo/rhsa-announce . Please note that the RDS protocol is blacklisted in Ubuntu by default. 7.3) - noarch, x86_64
Bug Fix(es):
-
RHEL7.5 - kernel crashed at xfs_reclaim_inodes_count+0x70/0xa0 (BZ#1795578)
-
Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Bug Fix(es):
-
patchset for x86/atomic: Fix smp_mb__{before,after}_atomic() [kernel-rt] (BZ#1772522)
-
kernel-rt: update to the RHEL7.7.z batch#4 source tree (BZ#1780322)
-
kvm nx_huge_pages_recovery_ratio=0 is needed to meet KVM-RT low latency requirement (BZ#1781157)
-
kernel-rt: hard lockup panic in during execution of CFS bandwidth period timer (BZ#1788057)
Bug Fix(es):
-
[xfstests]: copy_file_range cause corruption on rhel-7 (BZ#1797965)
-
port show-kabi to python3 (BZ#1806926)
-
7.6) - ppc64, ppc64le, x86_64
Bug Fix(es):
-
[PATCH] perf: Fix a race between ring_buffer_detach() and ring_buffer_wakeup() (BZ#1772826)
-
core: backports from upstream (BZ#1780031)
-
Race between tty_open() and flush_to_ldisc() using the tty_struct->driver_data field. (BZ#1780160)
-
[Hyper-V][RHEL7.6]Hyper-V guest waiting indefinitely for RCU callback when removing a mem cgroup (BZ#1783176)
Enhancement(s):
- Selective backport: perf: Sync with upstream v4.16 (BZ#1782752)
Show details on source website
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-201909-1526",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "data availability services",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.3"
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.15"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "3.6"
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "enterprise linux compute node eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "a800",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.14.146"
},
{
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.9.194"
},
{
"model": "a220",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.20"
},
{
"model": "enterprise linux for real time for nfv",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7"
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.4"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.3"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "3.17"
},
{
"model": "virtualization",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.2"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux for power big endian eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6_ppc64"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "8.0"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "16.04"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "30"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "18.04"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.5"
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.2"
},
{
"model": "virtualization",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "a700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "fas2750",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fas2720",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.0"
},
{
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.1"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux for real time for nfv",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"model": "c190",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"model": "messaging realtime grid",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "2.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "29"
},
{
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7"
},
{
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.2.17"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.4.194"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.10"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"model": "a320",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.19.75"
},
{
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "service processor",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "3.16.74"
},
{
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.0"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.0"
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.1"
},
{
"model": "enterprise linux tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "5.0"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "19.04"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "14.04"
},
{
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"model": "kernel",
"scope": "lt",
"trust": 0.8,
"vendor": "linux",
"version": "5.3"
},
{
"model": "enterprise linux",
"scope": null,
"trust": 0.8,
"vendor": "red hat",
"version": null
},
{
"model": "enterprise mrg",
"scope": null,
"trust": 0.8,
"vendor": "red hat",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.2.17",
"versionStartIncluding": "4.20",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.4.194",
"versionStartIncluding": "3.17",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.9.194",
"versionStartIncluding": "4.5",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.14.146",
"versionStartIncluding": "4.10",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.19.75",
"versionStartIncluding": "4.15",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.16.74",
"versionStartIncluding": "3.6",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:6.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time:7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv:7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:5.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:7.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:6.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:redhat:virtualization:4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:virtualization:4.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:7.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_tus:7.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:messaging_realtime_grid:2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time:8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv_tus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv_tus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_tus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_tus:8.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv:8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_compute_node_eus:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_big_endian_eus:7.6_ppc64:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:29:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:service_processor:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:data_availability_services:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:a700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:a700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:a320_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:a320:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:c190_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:c190:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:a220_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:a220:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fas2720_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fas2720:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fas2750_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fas2750:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:a800_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:a800:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h610s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h610s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.04:*:*:*:lts:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:19.04:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:14.04:*:*:*:esm:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:16.04:*:*:*:esm:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:opensuse:leap:15.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "156602"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "157140"
},
{
"db": "PACKETSTORM",
"id": "156608"
}
],
"trust": 0.6
},
"cve": "CVE-2019-14816",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "HIGH",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Local",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Complete",
"baseScore": 7.2,
"confidentialityImpact": "Complete",
"exploitabilityScore": null,
"id": "CVE-2019-14816",
"impactScore": null,
"integrityImpact": "Complete",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "High",
"trust": 0.8,
"userInteractionRequired": null,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "secalert@redhat.com",
"availabilityImpact": "HIGH",
"baseScore": 5.5,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 1.8,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
"version": "3.0"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.8,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2019-14816",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2019-14816",
"trust": 1.8,
"value": "HIGH"
},
{
"author": "secalert@redhat.com",
"id": "CVE-2019-14816",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-201908-2176",
"trust": 0.6,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "There is heap-based buffer overflow in kernel, all versions up to, excluding 5.3, in the marvell wifi chip driver in Linux kernel, that allows local users to cause a denial of service(system crash) or possibly execute arbitrary code. Linux Kernel Contains a classic buffer overflow vulnerability. Information is obtained, information is altered, and service operation is disrupted (DoS) There is a possibility of being put into a state. =========================================================================\nUbuntu Security Notice USN-4162-2\nOctober 23, 2019\n\nlinux-azure vulnerabilities\n=========================================================================\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. This update provides the corresponding updates for the Linux\nkernel for Microsoft Azure Cloud systems for Ubuntu 14.04 ESM. \n\nIt was discovered that the RSI 91x Wi-Fi driver in the Linux kernel did not\nhandle detach operations correctly, leading to a use-after-free\nvulnerability. (CVE-2019-14814,\nCVE-2019-14815, CVE-2019-14816)\n\nMatt Delco discovered that the KVM hypervisor implementation in the Linux\nkernel did not properly perform bounds checking when handling coalesced\nMMIO write operations. A local attacker with write access to /dev/kvm could\nuse this to cause a denial of service (system crash). (CVE-2019-14821)\n\nHui Peng and Mathias Payer discovered that the USB audio driver for the\nLinux kernel did not properly validate device meta data. A physically\nproximate attacker could use this to cause a denial of service (system\ncrash). (CVE-2019-15117)\n\nHui Peng and Mathias Payer discovered that the USB audio driver for the\nLinux kernel improperly performed recursion while handling device meta\ndata. A physically proximate attacker could use this to cause a denial of\nservice (system crash). 
(CVE-2019-15118)\n\nIt was discovered that the Technisat DVB-S/S2 USB device driver in the\nLinux kernel contained a buffer overread. A physically proximate attacker\ncould use this to cause a denial of service (system crash) or possibly\nexpose sensitive information. (CVE-2019-15505)\n\nBrad Spengler discovered that a Spectre mitigation was improperly\nimplemented in the ptrace susbsystem of the Linux kernel. A local attacker\ncould possibly use this to expose sensitive information. (CVE-2019-15902)\n\nIt was discovered that the SMB networking file system implementation in the\nLinux kernel contained a buffer overread. An attacker could use this to\nexpose sensitive information (kernel memory). (CVE-2019-15918)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 14.04 ESM:\n linux-image-4.15.0-1061-azure 4.15.0-1061.66~14.04.1\n linux-image-azure 4.15.0.1061.47\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel security and bug fix update\nAdvisory ID: RHSA-2020:0374-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2020:0374\nIssue date: 2020-02-04\nCVE Names: CVE-2019-14816 CVE-2019-14895 CVE-2019-14898 \n CVE-2019-14901 CVE-2019-17133 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe kernel packages contain the Linux kernel, the core of any Linux\noperating system. 
\n\nSecurity Fix(es):\n\n* kernel: heap overflow in mwifiex_update_vs_ie() function of Marvell WiFi\ndriver (CVE-2019-14816)\n\n* kernel: heap-based buffer overflow in mwifiex_process_country_ie()\nfunction in drivers/net/wireless/marvell/mwifiex/sta_ioctl.c\n(CVE-2019-14895)\n\n* kernel: heap overflow in marvell/mwifiex/tdls.c (CVE-2019-14901)\n\n* kernel: buffer overflow in cfg80211_mgd_wext_giwessid in\nnet/wireless/wext-sme.c (CVE-2019-17133)\n\n* kernel: incomplete fix for race condition between\nmmget_not_zero()/get_task_mm() and core dumping in CVE-2019-11599\n(CVE-2019-14898)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* [Azure][7.8] Include patch \"PCI: hv: Avoid use of hv_pci_dev-\u003epci_slot\nafter freeing it\" (BZ#1766089)\n\n* [Hyper-V][RHEL7.8] When accelerated networking is enabled on RedHat,\nnetwork interface(eth0) moved to new network namespace does not obtain IP\naddress. 
(BZ#1766093)\n\n* [Azure][RHEL 7.6] hv_vmbus probe pass-through GPU card failed\n(BZ#1766097)\n\n* SMB3: Do not error out on large file transfers if server responds with\nSTATUS_INSUFFICIENT_RESOURCES (BZ#1767621)\n\n* Since RHEL commit 5330f5d09820 high load can cause dm-multipath path\nfailures (BZ#1770113)\n\n* Hard lockup in free_one_page()-\u003e_raw_spin_lock() because sosreport\ncommand is reading from /proc/pagetypeinfo (BZ#1770732)\n\n* patchset for x86/atomic: Fix smp_mb__{before,after}_atomic() (BZ#1772812)\n\n* fix compat statfs64() returning EOVERFLOW for when _FILE_OFFSET_BITS=64\n(BZ#1775678)\n\n* Guest crash after load cpuidle-haltpoll driver (BZ#1776289)\n\n* RHEL 7.7 long I/O stalls with bnx2fc from not masking off scope bits of\nretry delay value (BZ#1776290)\n\n* Multiple \"mv\" processes hung on a gfs2 filesystem (BZ#1777297)\n\n* Moving Egress IP will result in conntrack sessions being DESTROYED\n(BZ#1779564)\n\n* core: backports from upstream (BZ#1780033)\n\n* kernel BUG at arch/powerpc/platforms/pseries/lpar.c:482! (BZ#1780148)\n\n* Race between tty_open() and flush_to_ldisc() using the\ntty_struct-\u003edriver_data field. (BZ#1780163)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 
7):\n\nSource:\nkernel-3.10.0-1062.12.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.12.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1062.12.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 
7):\n\nSource:\nkernel-3.10.0-1062.12.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.12.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1062.12.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nkernel-3.10.0-1062.12.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.12.1.el7.noarch.rpm\n\nppc64:\nbpftool-3.10.0-1062.12.1.el7.ppc64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-bootwrapper-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.ppc64.rpm\nperf-3.10.0-1062.12.1.el7.ppc64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\npython-perf-3.10.0-1062.12.1.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\n\nppc64le:\nbpftool-3.10.0-1062.12.1.el7.ppc64le.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-bootwrapper-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debug-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-devel-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-headers-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.ppc64le.rpm\nperf-3.10.0-1062.12.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\npython-perf-3.10.0-1062.12.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\n\ns390x:\nbpftool-3.10.0-1062.12.1.el7.s390x.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm
\nkernel-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debug-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debuginfo-common-s390x-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-devel-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-headers-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-kdump-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-kdump-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-kdump-devel-3.10.0-1062.12.1.el7.s390x.rpm\nperf-3.10.0-1062.12.1.el7.s390x.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\npython-perf-3.10.0-1062.12.1.el7.s390x.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\n\nx86_64:\nbpftool-3.10.0-1062.12.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.ppc64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\n\nppc64le:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nkernel-3.10.0-1062.12.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.12.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1062.12.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-14816\nhttps://access.redhat.com/security/cve/CVE-2019-14895\nhttps://access.redhat.com/security/cve/CVE-2019-14898\nhttps://access.redhat.com/security/cve/CVE-2019-14901\nhttps://access.redhat.com/security/cve/CVE-2019-17133\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBXjnG/NzjgjWX9erEAQiZpA/+PrziwQc9nitsDyWqtq556llAnWG2YjEK\nkzbq/d3Vp+7i0aaOHXNG9b6XDgR8kPSLnb/2tCUBQKmLeWEptgY6s24mXXkiAHry\nplZ40Xlmca9cjPQCSET7IkQyHlYcUsc9orUT3g1PsZ0uOxPQZ1ivB1utn6nyhbSg\n9Az/e/9ai7R++mv4zJ7UDrDzuGPv5SOtyIcfuUyYdbuZO9OrmFsbWCRwG+cVvXJ6\nq6uXlIpcWx4H7key9SiboU/VSXXPQ0E5vv1A72biDgCXhm2kYWEJXSwlLH2jJJo7\nDfujB4+NSnDVp7Qu0aF/YsEiR9JQfGOOrfuNsmOSdK3Bx3p8LkS4Fd9y3H/fCwjI\nEOoXerSgeGjB5E/DtH24HKu1FB5ZniDJP69itCIONokq6BltVZsQRvZxpXQdmvpz\nhTJIkYqnuvrkv2liCc8Dr7P7EK0SBPhwhmcBMcAcPHE8BbOtEkcGzF2f2/p/CQci\nN0c4UhB2p+eSLq+W4qG4W/ZyyUh2oYdvPjPCrziT1qHOR4ilw9fH9b+jCxmAM7Lh\nwqj3yMR9YhUrEBRUUokA/wjggmI88u6I8uQatbf6Keqj1v1CykMKF3AEC5qfxwGz\nhk0YzSh0YK6DfybzNxcZK/skcp0Ga0vD+El/nXFI0WGXB8LsQiOUBgfp1JyAlXT6\nIwzrfQ6EsXE=\n=mofI\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. Please note that the RDS protocol is blacklisted in Ubuntu by\ndefault. 7.3) - noarch, x86_64\n\n3. \n\nBug Fix(es):\n\n* RHEL7.5 - kernel crashed at xfs_reclaim_inodes_count+0x70/0xa0\n(BZ#1795578)\n\n4. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. 
\n\nBug Fix(es):\n\n* patchset for x86/atomic: Fix smp_mb__{before,after}_atomic() [kernel-rt]\n(BZ#1772522)\n\n* kernel-rt: update to the RHEL7.7.z batch#4 source tree (BZ#1780322)\n\n* kvm nx_huge_pages_recovery_ratio=0 is needed to meet KVM-RT low latency\nrequirement (BZ#1781157)\n\n* kernel-rt: hard lockup panic in during execution of CFS bandwidth period\ntimer (BZ#1788057)\n\n4. \n\nBug Fix(es):\n\n* [xfstests]: copy_file_range cause corruption on rhel-7 (BZ#1797965)\n\n* port show-kabi to python3 (BZ#1806926)\n\n4. 7.6) - ppc64, ppc64le, x86_64\n\n3. \n\nBug Fix(es):\n\n* [PATCH] perf: Fix a race between ring_buffer_detach() and\nring_buffer_wakeup() (BZ#1772826)\n\n* core: backports from upstream (BZ#1780031)\n\n* Race between tty_open() and flush_to_ldisc() using the\ntty_struct-\u003edriver_data field. (BZ#1780160)\n\n* [Hyper-V][RHEL7.6]Hyper-V guest waiting indefinitely for RCU callback\nwhen removing a mem cgroup (BZ#1783176)\n\nEnhancement(s):\n\n* Selective backport: perf: Sync with upstream v4.16 (BZ#1782752)\n\n4",
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14816"
},
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"db": "PACKETSTORM",
"id": "154948"
},
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "PACKETSTORM",
"id": "156602"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "157140"
},
{
"db": "PACKETSTORM",
"id": "156608"
}
],
"trust": 2.34
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2019-14816",
"trust": 3.2
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/08/28/1",
"trust": 2.4
},
{
"db": "PACKETSTORM",
"id": "155212",
"trust": 1.6
},
{
"db": "PACKETSTORM",
"id": "154951",
"trust": 1.6
},
{
"db": "JVNDB",
"id": "JVNDB-2019-009588",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "154897",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "156216",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "157140",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "156608",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "156020",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0415",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.3817",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1172",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.4252",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.3570",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.4346",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0790",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3064",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0766",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.3897",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.3835",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.4346.2",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1248",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "154948",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "156213",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "156603",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "156602",
"trust": 0.1
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"db": "PACKETSTORM",
"id": "154948"
},
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "PACKETSTORM",
"id": "156602"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "157140"
},
{
"db": "PACKETSTORM",
"id": "156608"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"id": "VAR-201909-1526",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.25113124
},
"last_update_date": "2024-07-23T20:14:55.872000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "mwifiex: Fix three heap overflow at parsing element in cfg80211_ap_settings",
"trust": 0.8,
"url": "https://github.com/torvalds/linux/commit/7caac62ed598a196d6ddf8d9c121e12e082cac3"
},
{
"title": "Linux Kernel Archives",
"trust": 0.8,
"url": "http://www.kernel.org"
},
{
"title": "Bug 1744149",
"trust": 0.8,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=cve-2019-14816"
},
{
"title": "CVE-2019-14816",
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-14816"
},
{
"title": "Linux kernel Buffer error vulnerability fix",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=97659"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-122",
"trust": 1.0
},
{
"problemtype": "CWE-120",
"trust": 0.8
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 4.0,
"url": "http://www.openwall.com/lists/oss-security/2019/08/28/1"
},
{
"trust": 2.8,
"url": "https://access.redhat.com/security/cve/cve-2019-14816"
},
{
"trust": 2.3,
"url": "https://access.redhat.com/errata/rhsa-2020:0374"
},
{
"trust": 2.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14816"
},
{
"trust": 2.2,
"url": "https://usn.ubuntu.com/4157-1/"
},
{
"trust": 2.2,
"url": "https://access.redhat.com/errata/rhsa-2020:0339"
},
{
"trust": 1.7,
"url": "https://access.redhat.com/errata/rhsa-2020:0661"
},
{
"trust": 1.7,
"url": "https://access.redhat.com/errata/rhsa-2020:0653"
},
{
"trust": 1.7,
"url": "https://access.redhat.com/errata/rhsa-2020:0375"
},
{
"trust": 1.7,
"url": "https://access.redhat.com/errata/rhsa-2020:0664"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4163-2/"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4162-1/"
},
{
"trust": 1.6,
"url": "https://access.redhat.com/errata/rhsa-2020:0328"
},
{
"trust": 1.6,
"url": "http://packetstormsecurity.com/files/155212/slackware-security-advisory-slackware-14.2-kernel-updates.html"
},
{
"trust": 1.6,
"url": "http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00064.html"
},
{
"trust": 1.6,
"url": "https://lists.debian.org/debian-lts-announce/2020/03/msg00001.html"
},
{
"trust": 1.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/o3rudqjxrjqvghcgr4yzwtq3ecbi7txh/"
},
{
"trust": 1.6,
"url": "https://access.redhat.com/errata/rhsa-2020:0204"
},
{
"trust": 1.6,
"url": "https://lists.debian.org/debian-lts-announce/2019/09/msg00025.html"
},
{
"trust": 1.6,
"url": "https://security.netapp.com/advisory/ntap-20191031-0005/"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4163-1/"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4162-2/"
},
{
"trust": 1.6,
"url": "http://packetstormsecurity.com/files/154951/kernel-live-patch-security-notice-lsn-0058-1.html"
},
{
"trust": 1.6,
"url": "https://github.com/torvalds/linux/commit/7caac62ed598a196d6ddf8d9c121e12e082cac3"
},
{
"trust": 1.6,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=cve-2019-14816"
},
{
"trust": 1.6,
"url": "http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00066.html"
},
{
"trust": 1.6,
"url": "https://seclists.org/bugtraq/2019/nov/11"
},
{
"trust": 1.6,
"url": "https://access.redhat.com/errata/rhsa-2020:0174"
},
{
"trust": 1.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/t4jz6aeukfwbhqarogmqarj274pqp2qp/"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4157-2/"
},
{
"trust": 1.4,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/o3rudqjxrjqvghcgr4yzwtq3ecbi7txh/"
},
{
"trust": 1.4,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/t4jz6aeukfwbhqarogmqarj274pqp2qp/"
},
{
"trust": 0.8,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14816"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2020:1347"
},
{
"trust": 0.6,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=1744149"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/errata/rhsa-2020:1353"
},
{
"trust": 0.6,
"url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7caac62ed598a196d6ddf8d9c121e12e082cac3a"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/errata/rhsa-2020:1266"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192984-1.html"
},
{
"trust": 0.6,
"url": "https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00237.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192658-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192651-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192953-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192952-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192951-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192950-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192949-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192948-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192947-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192946-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192424-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192414-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192412-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192648-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/156608/red-hat-security-advisory-2020-0664-01.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-buffer-overflow-via-net-wireless-marvell-mwifiex-30180"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.3570/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1248/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0766/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.4346/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0415/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.4252/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/157140/red-hat-security-advisory-2020-1347-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.3835/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/156020/red-hat-security-advisory-2020-0174-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.3817/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0790/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/154897/ubuntu-security-notice-usn-4157-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/156216/red-hat-security-advisory-2020-0375-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1172/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.3897/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3064/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.4346.2/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2019-17133"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14895"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17133"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2019-14895"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15505"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14815"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14821"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15902"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14898"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14901"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14901"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14898"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17666"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17666"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4162-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15117"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15918"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4162-2"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-21008"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15118"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20976"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20976"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.0.0-1019.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/5.0.0-1020.20"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4157-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.0.0-32.34"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15504"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2181"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/5.0.0-1024.25"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16714"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.0.0-1023.24"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14814"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.0.0-1020.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.0.0-1021.21"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20856"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20856"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"db": "PACKETSTORM",
"id": "154948"
},
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "PACKETSTORM",
"id": "156602"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "157140"
},
{
"db": "PACKETSTORM",
"id": "156608"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"db": "PACKETSTORM",
"id": "154948"
},
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "PACKETSTORM",
"id": "156602"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "157140"
},
{
"db": "PACKETSTORM",
"id": "156608"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2019-09-25T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"date": "2019-10-23T18:28:53",
"db": "PACKETSTORM",
"id": "154948"
},
{
"date": "2020-02-05T18:37:11",
"db": "PACKETSTORM",
"id": "156213"
},
{
"date": "2020-03-03T14:09:01",
"db": "PACKETSTORM",
"id": "156603"
},
{
"date": "2019-10-17T15:18:45",
"db": "PACKETSTORM",
"id": "154897"
},
{
"date": "2020-03-03T14:08:50",
"db": "PACKETSTORM",
"id": "156602"
},
{
"date": "2020-02-05T18:49:35",
"db": "PACKETSTORM",
"id": "156216"
},
{
"date": "2020-04-07T16:41:32",
"db": "PACKETSTORM",
"id": "157140"
},
{
"date": "2020-03-03T16:33:49",
"db": "PACKETSTORM",
"id": "156608"
},
{
"date": "2019-08-28T00:00:00",
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"date": "2019-09-20T19:15:11.767000",
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2019-09-25T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2019-009588"
},
{
"date": "2023-03-23T00:00:00",
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"date": "2023-07-12T19:27:37.873000",
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
}
],
"trust": 0.7
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Linux Kernel Vulnerable to classic buffer overflow",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2019-009588"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "overflow",
"sources": [
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "156602"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "157140"
},
{
"db": "PACKETSTORM",
"id": "156608"
}
],
"trust": 0.6
}
}
VAR-201909-0695
Vulnerability from variot - Updated: 2024-07-23 20:01 A buffer overflow flaw was found, in versions from 2.6.34 to 5.2.x, in the way the Linux kernel's vhost functionality, which translates virtqueue buffers to IOVs, logged the buffer descriptors during migration. A privileged guest user able to pass descriptors with invalid length to the host while migration is underway could use this flaw to increase their privileges on the host. This vulnerability stems from incorrect verification of data boundaries when the affected system or product performs memory operations, which can result in reads and writes to other, adjacent memory locations. Attackers can exploit this vulnerability to cause a buffer overflow or heap overflow, etc. 6.5) - x86_64
- (CVE-2019-14835)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. (CVE-2019-15031)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 19.04:
  linux-image-5.0.0-1016-aws        5.0.0-1016.18
  linux-image-5.0.0-1017-gcp        5.0.0-1017.17
  linux-image-5.0.0-1017-kvm        5.0.0-1017.18
  linux-image-5.0.0-1017-raspi2     5.0.0-1017.17
  linux-image-5.0.0-1020-azure      5.0.0-1020.21
  linux-image-5.0.0-1021-snapdragon 5.0.0-1021.22
  linux-image-5.0.0-29-generic      5.0.0-29.31
  linux-image-5.0.0-29-generic-lpae 5.0.0-29.31
  linux-image-5.0.0-29-lowlatency   5.0.0-29.31
  linux-image-aws                   5.0.0.1016.17
  linux-image-azure                 5.0.0.1020.19
  linux-image-gcp                   5.0.0.1017.43
  linux-image-generic               5.0.0.29.30
  linux-image-generic-lpae          5.0.0.29.30
  linux-image-gke                   5.0.0.1017.43
  linux-image-kvm                   5.0.0.1017.17
  linux-image-lowlatency            5.0.0.29.30
  linux-image-raspi2                5.0.0.1017.14
  linux-image-snapdragon            5.0.0.1021.14
  linux-image-virtual               5.0.0.29.30
Ubuntu 18.04 LTS:
  linux-image-4.15.0-1025-oracle      4.15.0-1025.28
  linux-image-4.15.0-1044-gcp         4.15.0-1044.70
  linux-image-4.15.0-1044-gke         4.15.0-1044.46
  linux-image-4.15.0-1046-kvm         4.15.0-1046.46
  linux-image-4.15.0-1047-raspi2      4.15.0-1047.51
  linux-image-4.15.0-1050-aws         4.15.0-1050.52
  linux-image-4.15.0-1056-oem         4.15.0-1056.65
  linux-image-4.15.0-1064-snapdragon  4.15.0-1064.71
  linux-image-4.15.0-64-generic       4.15.0-64.73
  linux-image-4.15.0-64-generic-lpae  4.15.0-64.73
  linux-image-4.15.0-64-lowlatency    4.15.0-64.73
  linux-image-5.0.0-1017-gke          5.0.0-1017.17~18.04.1
  linux-image-5.0.0-1020-azure        5.0.0-1020.21~18.04.1
  linux-image-5.0.0-29-generic        5.0.0-29.31~18.04.1
  linux-image-5.0.0-29-generic-lpae   5.0.0-29.31~18.04.1
  linux-image-5.0.0-29-lowlatency     5.0.0-29.31~18.04.1
  linux-image-aws                     4.15.0.1050.49
  linux-image-azure                   5.0.0.1020.30
  linux-image-gcp                     4.15.0.1044.70
  linux-image-generic                 4.15.0.64.66
  linux-image-generic-hwe-18.04       5.0.0.29.86
  linux-image-generic-lpae            4.15.0.64.66
  linux-image-generic-lpae-hwe-18.04  5.0.0.29.86
  linux-image-gke                     4.15.0.1044.47
  linux-image-gke-4.15                4.15.0.1044.47
  linux-image-gke-5.0                 5.0.0.1017.7
  linux-image-kvm                     4.15.0.1046.46
  linux-image-lowlatency              4.15.0.64.66
  linux-image-lowlatency-hwe-18.04    5.0.0.29.86
  linux-image-oem                     4.15.0.1056.60
  linux-image-oracle                  4.15.0.1025.28
  linux-image-powerpc-e500mc          4.15.0.64.66
  linux-image-powerpc-smp             4.15.0.64.66
  linux-image-powerpc64-emb           4.15.0.64.66
  linux-image-powerpc64-smp           4.15.0.64.66
  linux-image-raspi2                  4.15.0.1047.45
  linux-image-snapdragon              4.15.0.1064.67
  linux-image-snapdragon-hwe-18.04    5.0.0.29.86
  linux-image-virtual                 4.15.0.64.66
  linux-image-virtual-hwe-18.04       5.0.0.29.86
Ubuntu 16.04 LTS: linux-image-4.15.0-1025-oracle 4.15.0-1025.28~16.04.1 linux-image-4.15.0-1044-gcp 4.15.0-1044.46 linux-image-4.15.0-1050-aws 4.15.0-1050.52~16.04.1 linux-image-4.15.0-1059-azure 4.15.0-1059.64 linux-image-4.15.0-64-generic 4.15.0-64.73~16.04.1 linux-image-4.15.0-64-generic-lpae 4.15.0-64.73~16.04.1 linux-image-4.15.0-64-lowlatency 4.15.0-64.73~16.04.1 linux-image-4.4.0-1058-kvm 4.4.0-1058.65 linux-image-4.4.0-1094-aws 4.4.0-1094.105 linux-image-4.4.0-1122-raspi2 4.4.0-1122.131 linux-image-4.4.0-1126-snapdragon 4.4.0-1126.132 linux-image-4.4.0-164-generic 4.4.0-164.192 linux-image-4.4.0-164-generic-lpae 4.4.0-164.192 linux-image-4.4.0-164-lowlatency 4.4.0-164.192 linux-image-4.4.0-164-powerpc-e500mc 4.4.0-164.192 linux-image-4.4.0-164-powerpc-smp 4.4.0-164.192 linux-image-4.4.0-164-powerpc64-emb 4.4.0-164.192 linux-image-4.4.0-164-powerpc64-smp 4.4.0-164.192 linux-image-aws 4.4.0.1094.98 linux-image-aws-hwe 4.15.0.1050.50 linux-image-azure 4.15.0.1059.62 linux-image-gcp 4.15.0.1044.58 linux-image-generic 4.4.0.164.172 linux-image-generic-hwe-16.04 4.15.0.64.84 linux-image-generic-lpae 4.4.0.164.172 linux-image-generic-lpae-hwe-16.04 4.15.0.64.84 linux-image-gke 4.15.0.1044.58 linux-image-kvm 4.4.0.1058.58 linux-image-lowlatency 4.4.0.164.172 linux-image-lowlatency-hwe-16.04 4.15.0.64.84 linux-image-oem 4.15.0.64.84 linux-image-oracle 4.15.0.1025.18 linux-image-powerpc-e500mc 4.4.0.164.172 linux-image-powerpc-smp 4.4.0.164.172 linux-image-powerpc64-emb 4.4.0.164.172 linux-image-powerpc64-smp 4.4.0.164.172 linux-image-raspi2 4.4.0.1122.122 linux-image-snapdragon 4.4.0.1126.118 linux-image-virtual 4.4.0.164.172 linux-image-virtual-hwe-16.04 4.15.0.64.84
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.

==========================================================================
Kernel Live Patch Security Notice 0058-1
October 22, 2019
linux vulnerability
A security issue affects these releases of Ubuntu:
| Series | Base kernel | Arch | flavors | |------------------+--------------+----------+------------------| | Ubuntu 18.04 LTS | 4.15.0 | amd64 | aws | | Ubuntu 18.04 LTS | 4.15.0 | amd64 | generic | | Ubuntu 18.04 LTS | 4.15.0 | amd64 | lowlatency | | Ubuntu 18.04 LTS | 4.15.0 | amd64 | oem | | Ubuntu 18.04 LTS | 5.0.0 | amd64 | azure | | Ubuntu 14.04 LTS | 4.4.0 | amd64 | generic | | Ubuntu 14.04 LTS | 4.4.0 | amd64 | lowlatency | | Ubuntu 16.04 LTS | 4.4.0 | amd64 | aws | | Ubuntu 16.04 LTS | 4.4.0 | amd64 | generic | | Ubuntu 16.04 LTS | 4.4.0 | amd64 | lowlatency | | Ubuntu 16.04 LTS | 4.15.0 | amd64 | azure | | Ubuntu 16.04 LTS | 4.15.0 | amd64 | generic | | Ubuntu 16.04 LTS | 4.15.0 | amd64 | lowlatency |
Summary:
Several security issues were fixed in the kernel.
Software Description: - linux: Linux kernel
Details:
It was discovered that a race condition existed in the GFS2 file system in the Linux kernel. A local attacker could possibly use this to cause a denial of service (system crash). (CVE-2016-10905)
It was discovered that a use-after-free error existed in the block layer subsystem of the Linux kernel when certain failure conditions occurred. A local attacker could possibly use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2018-20856)
It was discovered that the USB gadget Midi driver in the Linux kernel contained a double-free vulnerability when handling certain error conditions. A local attacker could use this to cause a denial of service (system crash). (CVE-2018-20961)
It was discovered that the XFS file system in the Linux kernel did not properly handle mount failures in some situations. A local attacker could possibly use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2018-20976)
It was discovered that the RSI 91x Wi-Fi driver in the Linux kernel did not handle detach operations correctly, leading to a use-after-free vulnerability. A physically proximate attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2018-21008)
It was discovered that the Intel Wi-Fi device driver in the Linux kernel did not properly validate certain Tunneled Direct Link Setup (TDLS) frames. A physically proximate attacker could use this to cause a denial of service (Wi-Fi disconnect). (CVE-2019-0136)
It was discovered that the Linux kernel on ARM processors allowed a tracing process to modify a syscall after a seccomp decision had been made on that syscall. A local attacker could possibly use this to bypass seccomp restrictions. (CVE-2019-2054)
It was discovered that an integer overflow existed in the Binder implementation of the Linux kernel, leading to a buffer overflow. A local attacker could use this to escalate privileges. (CVE-2019-2181)
It was discovered that the Marvell Wireless LAN device driver in the Linux kernel did not properly validate the BSS descriptor. A local attacker could possibly use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-3846)
It was discovered that a heap buffer overflow existed in the Marvell Wireless LAN device driver for the Linux kernel. An attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-10126)
It was discovered that the Bluetooth UART implementation in the Linux kernel did not properly check for missing tty operations. A local attacker could use this to cause a denial of service. (CVE-2019-10207)
Jonathan Looney discovered that an integer overflow existed in the Linux kernel when handling TCP Selective Acknowledgments (SACKs). A remote attacker could use this to cause a denial of service (system crash). (CVE-2019-11477)
Jonathan Looney discovered that the TCP retransmission queue implementation in the Linux kernel could be fragmented when handling certain TCP Selective Acknowledgment (SACK) sequences. A remote attacker could use this to cause a denial of service. (CVE-2019-11478)
It was discovered that the ext4 file system implementation in the Linux kernel did not properly zero out memory in some situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2019-11833)
It was discovered that the PowerPC dlpar implementation in the Linux kernel did not properly check for allocation errors in some situations. A local attacker could possibly use this to cause a denial of service (system crash). (CVE-2019-12614)
It was discovered that the floppy driver in the Linux kernel did not properly validate meta data, leading to a buffer overread. A local attacker could use this to cause a denial of service (system crash). (CVE-2019-14283)
It was discovered that the floppy driver in the Linux kernel did not properly validate ioctl() calls, leading to a division-by-zero. A local attacker could use this to cause a denial of service (system crash). (CVE-2019-14284)
Wen Huang discovered that the Marvell Wi-Fi device driver in the Linux kernel did not properly perform bounds checking, leading to a heap overflow. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-14814)
Wen Huang discovered that the Marvell Wi-Fi device driver in the Linux kernel did not properly perform bounds checking, leading to a heap overflow. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-14815)
Wen Huang discovered that the Marvell Wi-Fi device driver in the Linux kernel did not properly perform bounds checking, leading to a heap overflow. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-14816)
Matt Delco discovered that the KVM hypervisor implementation in the Linux kernel did not properly perform bounds checking when handling coalesced MMIO write operations. A local attacker with write access to /dev/kvm could use this to cause a denial of service (system crash). (CVE-2019-14821)
Peter Pi discovered a buffer overflow in the virtio network backend (vhost_net) implementation in the Linux kernel. An attacker in a guest may be able to use this to cause a denial of service (host system crash) or possibly execute arbitrary code on the host. (CVE-2019-14835)
Update instructions:
The problem can be corrected by updating your livepatches to the following versions:
| Kernel | Version | flavors | |--------------------------+----------+--------------------------| | 4.4.0-148.174 | 58.1 | lowlatency, generic | | 4.4.0-148.174~14.04.1 | 58.1 | lowlatency, generic | | 4.4.0-150.176 | 58.1 | generic, lowlatency | | 4.4.0-150.176~14.04.1 | 58.1 | lowlatency, generic | | 4.4.0-151.178 | 58.1 | lowlatency, generic | | 4.4.0-151.178~14.04.1 | 58.1 | generic, lowlatency | | 4.4.0-154.181 | 58.1 | lowlatency, generic | | 4.4.0-154.181~14.04.1 | 58.1 | generic, lowlatency | | 4.4.0-157.185 | 58.1 | lowlatency, generic | | 4.4.0-157.185~14.04.1 | 58.1 | generic, lowlatency | | 4.4.0-159.187 | 58.1 | lowlatency, generic | | 4.4.0-159.187~14.04.1 | 58.1 | generic, lowlatency | | 4.4.0-161.189 | 58.1 | lowlatency, generic | | 4.4.0-161.189~14.04.1 | 58.1 | lowlatency, generic | | 4.4.0-164.192 | 58.1 | lowlatency, generic | | 4.4.0-164.192~14.04.1 | 58.1 | lowlatency, generic | | 4.4.0-165.193 | 58.1 | generic, lowlatency | | 4.4.0-1083.93 | 58.1 | aws | | 4.4.0-1084.94 | 58.1 | aws | | 4.4.0-1085.96 | 58.1 | aws | | 4.4.0-1087.98 | 58.1 | aws | | 4.4.0-1088.99 | 58.1 | aws | | 4.4.0-1090.101 | 58.1 | aws | | 4.4.0-1092.103 | 58.1 | aws | | 4.4.0-1094.105 | 58.1 | aws | | 4.15.0-50.54 | 58.1 | generic, lowlatency | | 4.15.0-50.54~16.04.1 | 58.1 | generic, lowlatency | | 4.15.0-51.55 | 58.1 | generic, lowlatency | | 4.15.0-51.55~16.04.1 | 58.1 | generic, lowlatency | | 4.15.0-52.56 | 58.1 | lowlatency, generic | | 4.15.0-52.56~16.04.1 | 58.1 | generic, lowlatency | | 4.15.0-54.58 | 58.1 | generic, lowlatency | | 4.15.0-54.58~16.04.1 | 58.1 | generic, lowlatency | | 4.15.0-55.60 | 58.1 | generic, lowlatency | | 4.15.0-58.64 | 58.1 | generic, lowlatency | | 4.15.0-58.64~16.04.1 | 58.1 | lowlatency, generic | | 4.15.0-60.67 | 58.1 | lowlatency, generic | | 4.15.0-60.67~16.04.1 | 58.1 | generic, lowlatency | | 4.15.0-62.69 | 58.1 | generic, lowlatency | | 4.15.0-62.69~16.04.1 | 58.1 | lowlatency, generic | | 4.15.0-64.73 | 58.1 | generic, lowlatency | | 4.15.0-64.73~16.04.1 | 58.1 | lowlatency, generic | | 4.15.0-65.74 | 58.1 | lowlatency, generic | | 4.15.0-1038.43 | 58.1 | oem | | 4.15.0-1039.41 | 58.1 | aws | | 4.15.0-1039.44 | 58.1 | oem | | 4.15.0-1040.42 | 58.1 | aws | | 4.15.0-1041.43 | 58.1 | aws | | 4.15.0-1043.45 | 58.1 | aws | | 4.15.0-1043.48 | 58.1 | oem | | 4.15.0-1044.46 | 58.1 | aws | | 4.15.0-1045.47 | 58.1 | aws | | 4.15.0-1045.50 | 58.1 | oem | | 4.15.0-1047.49 | 58.1 | aws | | 4.15.0-1047.51 | 58.1 | azure | | 4.15.0-1048.50 | 58.1 | aws | | 4.15.0-1049.54 | 58.1 | azure | | 4.15.0-1050.52 | 58.1 | aws | | 4.15.0-1050.55 | 58.1 | azure | | 4.15.0-1050.57 | 58.1 | oem | | 4.15.0-1051.53 | 58.1 | aws | | 4.15.0-1051.56 | 58.1 | azure | | 4.15.0-1052.57 | 58.1 | azure | | 4.15.0-1055.60 | 58.1 | azure | | 4.15.0-1056.61 | 58.1 | azure | | 4.15.0-1056.65 | 58.1 | oem | | 4.15.0-1057.62 | 58.1 | azure | | 4.15.0-1057.66 | 58.1 | oem | | 4.15.0-1059.64 | 58.1 | azure | | 5.0.0-1014.14~18.04.1 | 58.1 | azure | | 5.0.0-1016.17~18.04.1 | 58.1 | azure | | 5.0.0-1018.19~18.04.1 | 58.1 | azure | | 5.0.0-1020.21~18.04.1 | 58.1 | azure |
Support Information:
Kernels older than the levels listed below do not receive livepatch updates. Please upgrade your kernel as soon as possible.
| Series | Version | Flavors | |------------------+------------------+--------------------------| | Ubuntu 18.04 LTS | 4.15.0-1039 | aws | | Ubuntu 16.04 LTS | 4.4.0-1083 | aws | | Ubuntu 18.04 LTS | 5.0.0-1000 | azure | | Ubuntu 16.04 LTS | 4.15.0-1047 | azure | | Ubuntu 18.04 LTS | 4.15.0-50 | generic lowlatency | | Ubuntu 16.04 LTS | 4.15.0-50 | generic lowlatency | | Ubuntu 14.04 LTS | 4.4.0-148 | generic lowlatency | | Ubuntu 18.04 LTS | 4.15.0-1038 | oem | | Ubuntu 16.04 LTS | 4.4.0-148 | generic lowlatency |
References: CVE-2016-10905, CVE-2018-20856, CVE-2018-20961, CVE-2018-20976, CVE-2018-21008, CVE-2019-0136, CVE-2019-2054, CVE-2019-2181, CVE-2019-3846, CVE-2019-10126, CVE-2019-10207, CVE-2019-11477, CVE-2019-11478, CVE-2019-11833, CVE-2019-12614, CVE-2019-14283, CVE-2019-14284, CVE-2019-14814, CVE-2019-14815, CVE-2019-14816, CVE-2019-14821, CVE-2019-14835
-- ubuntu-security-announce mailing list ubuntu-security-announce@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce .
Here are the details from the Slackware 14.2 ChangeLog: +--------------------------+ patches/packages/linux-4.4.199/: Upgraded. These updates fix various bugs and security issues. If you use lilo to boot your machine, be sure lilo.conf points to the correct kernel and initrd and run lilo as root to update the bootloader. If you use elilo to boot your machine, you should run eliloconfig to copy the kernel and initrd to the EFI System Partition. For more information, see: Fixed in 4.4.191: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3900 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15118 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10906 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10905 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10638 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15117 Fixed in 4.4.193: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14835 Fixed in 4.4.194: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14816 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14814 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15505 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14821 Fixed in 4.4.195: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17053 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17052 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17056 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17055 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17054 Fixed in 4.4.196: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2215 Fixed in 4.4.197: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16746 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20976 Fixed in 4.4.198: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17075 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17133 Fixed in 4.4.199: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15098 (* Security fix *)
+--------------------------+
Where to find the new packages: +-----------------------------+
Thanks to the friendly folks at the OSU Open Source Lab (http://osuosl.org) for donating FTP and rsync hosting to the Slackware project! :-)
Also see the "Get Slack" section on http://slackware.com for additional mirror sites near you.
Updated packages for Slackware 14.2: ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-generic-4.4.199-i586-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-generic-smp-4.4.199_smp-i686-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-headers-4.4.199_smp-x86-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-huge-4.4.199-i586-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-huge-smp-4.4.199_smp-i686-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-modules-4.4.199-i586-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-modules-smp-4.4.199_smp-i686-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-source-4.4.199_smp-noarch-1.txz
Updated packages for Slackware x86_64 14.2: ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-generic-4.4.199-x86_64-1.txz ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-headers-4.4.199-x86-1.txz ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-huge-4.4.199-x86_64-1.txz ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-modules-4.4.199-x86_64-1.txz ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-source-4.4.199-noarch-1.txz
MD5 signatures: +-------------+
Slackware 14.2 packages:
0e523f42e759ecc2399f36e37672f110 kernel-generic-4.4.199-i586-1.txz ee6451f5362008b46fee2e08e3077b21 kernel-generic-smp-4.4.199_smp-i686-1.txz a8338ef88f2e3ea9c74d564c36ccd420 kernel-headers-4.4.199_smp-x86-1.txz cd9e9c241e4eec2fba1dae658a28870e kernel-huge-4.4.199-i586-1.txz 842030890a424023817d42a83a86a7f4 kernel-huge-smp-4.4.199_smp-i686-1.txz 257db024bb4501548ac9118dbd2d9ae6 kernel-modules-4.4.199-i586-1.txz 96377cbaf7bca55aaca70358c63151a7 kernel-modules-smp-4.4.199_smp-i686-1.txz 0673e86466f9e624964d95107cf6712f kernel-source-4.4.199_smp-noarch-1.txz
Slackware x86_64 14.2 packages: 6d1ff428e7cad6caa8860acc402447a1 kernel-generic-4.4.199-x86_64-1.txz dadc091dc725b8227e0d1e35098d6416 kernel-headers-4.4.199-x86-1.txz f5f4c034203f44dd1513ad3504c42515 kernel-huge-4.4.199-x86_64-1.txz a5337cd8b2ca80d4d93b9e9688e42b03 kernel-modules-4.4.199-x86_64-1.txz 5dd6e46c04f37b97062dc9e52cc38add kernel-source-4.4.199-noarch-1.txz
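The published sums above can be checked before installing. A minimal sketch (the commented filename and sum are just one entry from the x86_64 list above; substitute the package you actually downloaded):

```shell
#!/bin/sh
# verify_md5 FILE EXPECTED_SUM -> exit status 0 if the file's MD5 matches
# the sum published in the advisory, non-zero otherwise.
verify_md5() {
    actual=$(md5sum "$1" | awk '{print $1}')
    [ "$actual" = "$2" ]
}

# Example, assuming a local copy of one of the packages listed above:
# verify_md5 kernel-generic-4.4.199-x86_64-1.txz 6d1ff428e7cad6caa8860acc402447a1 \
#     && echo "checksum OK" || echo "checksum MISMATCH"
```

Only install a package whose sum matches; a mismatch indicates a corrupted or tampered download.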
Installation instructions: +------------------------+
Upgrade the packages as root:
upgradepkg kernel-*.txz
If you are using an initrd, you'll need to rebuild it.
For a 32-bit SMP machine, use this command (substitute the appropriate kernel version if you are not running Slackware 14.2):
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.4.199-smp | bash
For a 64-bit machine, or a 32-bit uniprocessor machine, use this command (substitute the appropriate kernel version if you are not running Slackware 14.2):
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.4.199 | bash
Please note that "uniprocessor" has to do with the kernel you are running, not with the CPU. Most systems should run the SMP kernel (if they can) regardless of the number of cores the CPU has. If you aren't sure which kernel you are running, run "uname -a". If you see SMP there, you are running the SMP kernel and should use the 4.4.199-smp version when running mkinitrd_command_generator. Note that this is only for 32-bit -- 64-bit systems should always use 4.4.199 as the version.
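The rule above can be scripted. A sketch, assuming the stock Slackware kernels (where the SMP kernel's release string contains "smp") and the 4.4.199 version shipped in this update:

```shell
#!/bin/sh
# Pick the -k argument for mkinitrd_command_generator.sh following the
# rule above: 32-bit systems running an SMP kernel use the -smp suffix;
# x86_64 systems always use the plain version.
pick_kernel_version() {
    uname_r=$1      # output of `uname -r`
    arch=$2         # output of `uname -m`
    base=4.4.199    # kernel version shipped in this update
    case "$arch:$uname_r" in
        x86_64:*) echo "$base" ;;        # 64-bit always uses the plain kernel
        *smp*)    echo "${base}-smp" ;;  # 32-bit SMP kernel
        *)        echo "$base" ;;        # 32-bit uniprocessor kernel
    esac
}

# Example:
# /usr/share/mkinitrd/mkinitrd_command_generator.sh \
#     -k "$(pick_kernel_version "$(uname -r)" "$(uname -m)")" | bash
```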
If you are using lilo or elilo to boot the machine, you'll need to ensure that the machine is properly prepared before rebooting.
If using LILO: By default, lilo.conf contains an image= line that references a symlink that always points to the correct kernel. No editing should be required unless your machine uses a custom lilo.conf. If that is the case, be sure that the image= line references the correct kernel file. Either way, you'll need to run "lilo" as root to reinstall the boot loader.
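For reference, the relevant portion of a default lilo.conf looks roughly like this (the root device is a placeholder; a custom configuration may differ):

```text
# Fragment of /etc/lilo.conf -- the image= line points at a symlink that
# Slackware keeps aimed at the current kernel, so no edit is normally needed.
image = /boot/vmlinuz
  root = /dev/sda1        # example root device; yours may differ
  label = Linux
  read-only
```

Whether or not lilo.conf was edited, run "lilo" as root after a kernel update so the boot sector references the new kernel.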
If using elilo: Ensure that the /boot/vmlinuz symlink is pointing to the kernel you wish to use, and then run eliloconfig to update the EFI System Partition.
+-----+
Slackware Linux Security Team http://slackware.com/gpg-key security@slackware.com
+------------------------------------------------------------------------+ | To leave the slackware-security mailing list: | +------------------------------------------------------------------------+ | Send an email to majordomo@slackware.com with this text in the body of | | the email message: | | | | unsubscribe slackware-security | | | | You will get a confirmation message back containing instructions to | | complete the process. Please do not reply to this email address.

7) - aarch64, noarch, ppc64le
- These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
The following packages have been upgraded to a later upstream version: redhat-release-virtualization-host (4.2), redhat-virtualization-host (4.2).

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: kernel security update Advisory ID: RHSA-2019:2867-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2019:2867 Issue date: 2019-09-23 CVE Names: CVE-2019-14835
====================================================================

1. Summary:
An update for kernel is now available for Red Hat Enterprise Linux 7.4 Advanced Update Support, Red Hat Enterprise Linux 7.4 Telco Extended Update Support, and Red Hat Enterprise Linux 7.4 Update Services for SAP Solutions.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Server AUS (v. 7.4) - noarch, x86_64 Red Hat Enterprise Linux Server E4S (v. 7.4) - noarch, ppc64le, x86_64 Red Hat Enterprise Linux Server Optional AUS (v. 7.4) - x86_64 Red Hat Enterprise Linux Server Optional E4S (v. 7.4) - ppc64le, x86_64 Red Hat Enterprise Linux Server Optional TUS (v. 7.4) - x86_64 Red Hat Enterprise Linux Server TUS (v. 7.4) - noarch, x86_64
- Description:

(CVE-2019-14835)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1750727 - CVE-2019-14835 kernel: vhost-net: guest to host kernel escape during migration
- Package List:
Red Hat Enterprise Linux Server AUS (v. 7.4):
Source: kernel-3.10.0-693.59.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-693.59.1.el7.noarch.rpm kernel-doc-3.10.0-693.59.1.el7.noarch.rpm
x86_64: kernel-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm kernel-devel-3.10.0-693.59.1.el7.x86_64.rpm kernel-headers-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-693.59.1.el7.x86_64.rpm perf-3.10.0-693.59.1.el7.x86_64.rpm perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm python-perf-3.10.0-693.59.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server E4S (v. 7.4):
Source: kernel-3.10.0-693.59.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-693.59.1.el7.noarch.rpm kernel-doc-3.10.0-693.59.1.el7.noarch.rpm
ppc64le: kernel-3.10.0-693.59.1.el7.ppc64le.rpm kernel-bootwrapper-3.10.0-693.59.1.el7.ppc64le.rpm kernel-debug-3.10.0-693.59.1.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm kernel-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-693.59.1.el7.ppc64le.rpm kernel-devel-3.10.0-693.59.1.el7.ppc64le.rpm kernel-headers-3.10.0-693.59.1.el7.ppc64le.rpm kernel-tools-3.10.0-693.59.1.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm kernel-tools-libs-3.10.0-693.59.1.el7.ppc64le.rpm perf-3.10.0-693.59.1.el7.ppc64le.rpm perf-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm python-perf-3.10.0-693.59.1.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm
x86_64: kernel-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm kernel-devel-3.10.0-693.59.1.el7.x86_64.rpm kernel-headers-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-693.59.1.el7.x86_64.rpm perf-3.10.0-693.59.1.el7.x86_64.rpm perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm python-perf-3.10.0-693.59.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server TUS (v. 7.4):
Source: kernel-3.10.0-693.59.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-693.59.1.el7.noarch.rpm kernel-doc-3.10.0-693.59.1.el7.noarch.rpm
x86_64: kernel-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm kernel-devel-3.10.0-693.59.1.el7.x86_64.rpm kernel-headers-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-693.59.1.el7.x86_64.rpm perf-3.10.0-693.59.1.el7.x86_64.rpm perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm python-perf-3.10.0-693.59.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional AUS (v. 7.4):
x86_64: kernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-693.59.1.el7.x86_64.rpm perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional E4S (v. 7.4):
ppc64le: kernel-debug-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm kernel-debug-devel-3.10.0-693.59.1.el7.ppc64le.rpm kernel-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-693.59.1.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm kernel-tools-libs-devel-3.10.0-693.59.1.el7.ppc64le.rpm perf-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm
x86_64: kernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-693.59.1.el7.x86_64.rpm perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional TUS (v. 7.4):
x86_64: kernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-693.59.1.el7.x86_64.rpm perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2019-14835 https://access.redhat.com/security/updates/classification/#important https://access.redhat.com/security/vulnerabilities/kernel-vhost
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2019 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBXYi86tzjgjWX9erEAQi1YA//SFbLNTK8xgMJM4UXWUDM7d1LMS/f79kZ C+qG3/+OUqyltICprPUpLQOslAaLgY1C1slkqkxXXgNtfB4rBnE2gkDtSnqe0a7K 60JapVVCpXuJnGozoFusUPgSvy7DM3RJ+xcz4SS6fNIuRGq1bDjOd4E3tcQwk1rh hqqPwPmY1uJK2y6UWiyF9Q2Dvug5mO2ZKqxgwuZPQyRXg14BVxDfXIY6+8SuuL3j YLVmCNqpErcoqeIaUAXUbGyATrUwni8J1RY4Q5lNDMq8u31Xlu/CSaEqU5OxtkeB RvfAuFebnJkUo+YkzouWWKF+eTEhpqxXnPZq2KM8zStGmRpVmreg16THa6CSpi81 HDcsTNKFHsA3QWHosxramg1Z/RYSD+goD0lvnfEVvO4jaUW2f59q2PEu4riM897f L21luddDijVBYmzAngkrhKR0kiADLmU+p67nZ9EyNrR95XYaaJ6GRb5vSXBEgiXt vJJn8GuJAI5h13MzO5rAua82xTHxiQc28s5KCbCJFcp3DZo72mDIVE6MfGb1By5e T1IN17xPN7yVoy/c8WZ89unoNZNXMsTBhcWNNlXsXknALVaYi5i4AtWx3mOLP2gk c1Z7F9XG8niCTGSLvBs5gSAfjSFNB6YiU6NzX9Zlxb4/hr3HE20S+w3AGnvD2f05 hz6LDDCc4xc=hk2d -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://www.redhat.com/mailman/listinfo/rhsa-announce

Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-201909-0695",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux workstation",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.9.193"
},
{
"model": "aff a700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "30"
},
{
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.2"
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.5"
},
{
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.1rc1.b060"
},
{
"model": "openshift container platform",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "3.11"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "14.04"
},
{
"model": "imanager neteco",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "v600r009c10spc200"
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "8.0"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "12.04"
},
{
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.0"
},
{
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.0"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.4"
},
{
"model": "enterprise linux desktop",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.0"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.9"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.2"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.14.144"
},
{
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.rc2.b050"
},
{
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.0"
},
{
"model": "kernel",
"scope": "eq",
"trust": 1.0,
"vendor": "linux",
"version": "5.3"
},
{
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "3.16.74"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.1"
},
{
"model": "enterprise linux desktop",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.0"
},
{
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"model": "imanager neteco 6000",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "v600r008c10spc300"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.3"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "16.04"
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "imanager neteco",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "v600r009c00"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "19.04"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.4"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.6"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.14"
},
{
"model": "data availability services",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.19"
},
{
"model": "virtualization host",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.2"
},
{
"model": "service processor",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.1rc1.b080"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "18.04"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.4.193"
},
{
"model": "imanager neteco 6000",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "v600r008c20"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.5"
},
{
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.0.spc100.b210"
},
{
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.0"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.3"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.2.15"
},
{
"model": "enterprise linux workstation",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.0"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "2.6.34"
},
{
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.4"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.19.73"
},
{
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "virtualization",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "29"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:5.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.2.15",
"versionStartIncluding": "5.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.19.73",
"versionStartIncluding": "4.19",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.16.74",
"versionStartIncluding": "2.6.34",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.14.144",
"versionStartIncluding": "4.14",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.4.193",
"versionStartIncluding": "4.4",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.9.193",
"versionStartIncluding": "4.9",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.04:*:*:*:lts:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:19.04:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:14.04:*:*:*:esm:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:16.04:*:*:*:esm:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:12.04:*:*:*:-:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:29:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:opensuse:leap:15.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:aff_a700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:aff_a700s:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h610s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h610s:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:*:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:service_processor:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:data_availability_services:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:_steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_desktop:7.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_workstation:7.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:7.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server:7.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:6.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:6.5:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time:7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_desktop:6.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server:6.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_workstation:6.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:7.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:7.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:7.5:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:redhat:openshift_container_platform:3.11:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server:7.6:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:7.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:7.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time:8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:redhat:virtualization:4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:redhat:virtualization_host:4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:huawei:manageone:6.5.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:huawei:imanager_neteco_6000:v600r008c10spc300:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:huawei:imanager_neteco_6000:v600r008c20:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:huawei:imanager_neteco:v600r009c00:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:huawei:imanager_neteco:v600r009c10spc200:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:huawei:manageone:6.5.0.spc100.b210:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:huawei:manageone:6.5.1rc1.b060:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:huawei:manageone:6.5.1rc1.b080:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:huawei:manageone:6.5.rc2.b050:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154562"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154570"
},
{
"db": "PACKETSTORM",
"id": "154540"
}
],
"trust": 0.5
},
"cve": "CVE-2019-14835",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "HIGH",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"id": "VHN-146821",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 0.1,
                "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "HIGH",
"attackVector": "LOCAL",
"author": "secalert@redhat.com",
"availabilityImpact": "HIGH",
"baseScore": 7.2,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 0.6,
"impactScore": 6.0,
"integrityImpact": "HIGH",
"privilegesRequired": "HIGH",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.0/AV:L/AC:H/PR:H/UI:R/S:C/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2019-14835",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "secalert@redhat.com",
"id": "CVE-2019-14835",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-146821",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "A buffer overflow flaw was found, in versions from 2.6.34 to 5.2.x, in the way Linux kernel\u0027s vhost functionality that translates virtqueue buffers to IOVs, logged the buffer descriptors during migration. A privileged guest user able to pass descriptors with invalid length to the host when migration is underway, could use this flaw to increase their privileges on the host. This vulnerability stems from the incorrect verification of data boundaries when the network system or product performs operations on the memory, resulting in incorrect read and write operations to other associated memory locations. Attackers can exploit this vulnerability to cause buffer overflow or heap overflow, etc. 6.5) - x86_64\n\n3. \n(CVE-2019-14835)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. (CVE-2019-15031)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 19.04:\n linux-image-5.0.0-1016-aws 5.0.0-1016.18\n linux-image-5.0.0-1017-gcp 5.0.0-1017.17\n linux-image-5.0.0-1017-kvm 5.0.0-1017.18\n linux-image-5.0.0-1017-raspi2 5.0.0-1017.17\n linux-image-5.0.0-1020-azure 5.0.0-1020.21\n linux-image-5.0.0-1021-snapdragon 5.0.0-1021.22\n linux-image-5.0.0-29-generic 5.0.0-29.31\n linux-image-5.0.0-29-generic-lpae 5.0.0-29.31\n linux-image-5.0.0-29-lowlatency 5.0.0-29.31\n linux-image-aws 5.0.0.1016.17\n linux-image-azure 5.0.0.1020.19\n linux-image-gcp 5.0.0.1017.43\n linux-image-generic 5.0.0.29.30\n linux-image-generic-lpae 5.0.0.29.30\n linux-image-gke 5.0.0.1017.43\n linux-image-kvm 5.0.0.1017.17\n linux-image-lowlatency 5.0.0.29.30\n linux-image-raspi2 5.0.0.1017.14\n linux-image-snapdragon 5.0.0.1021.14\n linux-image-virtual 5.0.0.29.30\n\nUbuntu 18.04 LTS:\n linux-image-4.15.0-1025-oracle 4.15.0-1025.28\n linux-image-4.15.0-1044-gcp 4.15.0-1044.70\n 
linux-image-4.15.0-1044-gke 4.15.0-1044.46\n linux-image-4.15.0-1046-kvm 4.15.0-1046.46\n linux-image-4.15.0-1047-raspi2 4.15.0-1047.51\n linux-image-4.15.0-1050-aws 4.15.0-1050.52\n linux-image-4.15.0-1056-oem 4.15.0-1056.65\n linux-image-4.15.0-1064-snapdragon 4.15.0-1064.71\n linux-image-4.15.0-64-generic 4.15.0-64.73\n linux-image-4.15.0-64-generic-lpae 4.15.0-64.73\n linux-image-4.15.0-64-lowlatency 4.15.0-64.73\n linux-image-5.0.0-1017-gke 5.0.0-1017.17~18.04.1\n linux-image-5.0.0-1020-azure 5.0.0-1020.21~18.04.1\n linux-image-5.0.0-29-generic 5.0.0-29.31~18.04.1\n linux-image-5.0.0-29-generic-lpae 5.0.0-29.31~18.04.1\n linux-image-5.0.0-29-lowlatency 5.0.0-29.31~18.04.1\n linux-image-aws 4.15.0.1050.49\n linux-image-azure 5.0.0.1020.30\n linux-image-gcp 4.15.0.1044.70\n linux-image-generic 4.15.0.64.66\n linux-image-generic-hwe-18.04 5.0.0.29.86\n linux-image-generic-lpae 4.15.0.64.66\n linux-image-generic-lpae-hwe-18.04 5.0.0.29.86\n linux-image-gke 4.15.0.1044.47\n linux-image-gke-4.15 4.15.0.1044.47\n linux-image-gke-5.0 5.0.0.1017.7\n linux-image-kvm 4.15.0.1046.46\n linux-image-lowlatency 4.15.0.64.66\n linux-image-lowlatency-hwe-18.04 5.0.0.29.86\n linux-image-oem 4.15.0.1056.60\n linux-image-oracle 4.15.0.1025.28\n linux-image-powerpc-e500mc 4.15.0.64.66\n linux-image-powerpc-smp 4.15.0.64.66\n linux-image-powerpc64-emb 4.15.0.64.66\n linux-image-powerpc64-smp 4.15.0.64.66\n linux-image-raspi2 4.15.0.1047.45\n linux-image-snapdragon 4.15.0.1064.67\n linux-image-snapdragon-hwe-18.04 5.0.0.29.86\n linux-image-virtual 4.15.0.64.66\n linux-image-virtual-hwe-18.04 5.0.0.29.86\n\nUbuntu 16.04 LTS:\n linux-image-4.15.0-1025-oracle 4.15.0-1025.28~16.04.1\n linux-image-4.15.0-1044-gcp 4.15.0-1044.46\n linux-image-4.15.0-1050-aws 4.15.0-1050.52~16.04.1\n linux-image-4.15.0-1059-azure 4.15.0-1059.64\n linux-image-4.15.0-64-generic 4.15.0-64.73~16.04.1\n linux-image-4.15.0-64-generic-lpae 4.15.0-64.73~16.04.1\n linux-image-4.15.0-64-lowlatency 
4.15.0-64.73~16.04.1\n linux-image-4.4.0-1058-kvm 4.4.0-1058.65\n linux-image-4.4.0-1094-aws 4.4.0-1094.105\n linux-image-4.4.0-1122-raspi2 4.4.0-1122.131\n linux-image-4.4.0-1126-snapdragon 4.4.0-1126.132\n linux-image-4.4.0-164-generic 4.4.0-164.192\n linux-image-4.4.0-164-generic-lpae 4.4.0-164.192\n linux-image-4.4.0-164-lowlatency 4.4.0-164.192\n linux-image-4.4.0-164-powerpc-e500mc 4.4.0-164.192\n linux-image-4.4.0-164-powerpc-smp 4.4.0-164.192\n linux-image-4.4.0-164-powerpc64-emb 4.4.0-164.192\n linux-image-4.4.0-164-powerpc64-smp 4.4.0-164.192\n linux-image-aws 4.4.0.1094.98\n linux-image-aws-hwe 4.15.0.1050.50\n linux-image-azure 4.15.0.1059.62\n linux-image-gcp 4.15.0.1044.58\n linux-image-generic 4.4.0.164.172\n linux-image-generic-hwe-16.04 4.15.0.64.84\n linux-image-generic-lpae 4.4.0.164.172\n linux-image-generic-lpae-hwe-16.04 4.15.0.64.84\n linux-image-gke 4.15.0.1044.58\n linux-image-kvm 4.4.0.1058.58\n linux-image-lowlatency 4.4.0.164.172\n linux-image-lowlatency-hwe-16.04 4.15.0.64.84\n linux-image-oem 4.15.0.64.84\n linux-image-oracle 4.15.0.1025.18\n linux-image-powerpc-e500mc 4.4.0.164.172\n linux-image-powerpc-smp 4.4.0.164.172\n linux-image-powerpc64-emb 4.4.0.164.172\n linux-image-powerpc64-smp 4.4.0.164.172\n linux-image-raspi2 4.4.0.1122.122\n linux-image-snapdragon 4.4.0.1126.118\n linux-image-virtual 4.4.0.164.172\n linux-image-virtual-hwe-16.04 4.15.0.64.84\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 
==========================================================================\nKernel Live Patch Security Notice 0058-1\nOctober 22, 2019\n\nlinux vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu:\n\n| Series | Base kernel | Arch | flavors |\n|------------------+--------------+----------+------------------|\n| Ubuntu 18.04 LTS | 4.15.0 | amd64 | aws |\n| Ubuntu 18.04 LTS | 4.15.0 | amd64 | generic |\n| Ubuntu 18.04 LTS | 4.15.0 | amd64 | lowlatency |\n| Ubuntu 18.04 LTS | 4.15.0 | amd64 | oem |\n| Ubuntu 18.04 LTS | 5.0.0 | amd64 | azure |\n| Ubuntu 14.04 LTS | 4.4.0 | amd64 | generic |\n| Ubuntu 14.04 LTS | 4.4.0 | amd64 | lowlatency |\n| Ubuntu 16.04 LTS | 4.4.0 | amd64 | aws |\n| Ubuntu 16.04 LTS | 4.4.0 | amd64 | generic |\n| Ubuntu 16.04 LTS | 4.4.0 | amd64 | lowlatency |\n| Ubuntu 16.04 LTS | 4.15.0 | amd64 | azure |\n| Ubuntu 16.04 LTS | 4.15.0 | amd64 | generic |\n| Ubuntu 16.04 LTS | 4.15.0 | amd64 | lowlatency |\n\nSummary:\n\nSeveral security issues were fixed in the kernel. \n\nSoftware Description:\n- linux: Linux kernel\n\nDetails:\n\nIt was discovered that a race condition existed in the GFS2 file system in\nthe Linux kernel. A local attacker could possibly use this to cause a\ndenial of service (system crash). (CVE-2016-10905)\n\nIt was discovered that a use-after-free error existed in the block layer\nsubsystem of the Linux kernel when certain failure conditions occurred. A\nlocal attacker could possibly use this to cause a denial of service (system\ncrash) or possibly execute arbitrary code. (CVE-2018-20856)\n\nIt was discovered that the USB gadget Midi driver in the Linux kernel\ncontained a double-free vulnerability when handling certain error\nconditions. A local attacker could use this to cause a denial of service\n(system crash). 
(CVE-2018-20961)\n\nIt was discovered that the XFS file system in the Linux kernel did not\nproperly handle mount failures in some situations. A local attacker could\npossibly use this to cause a denial of service (system crash) or execute\narbitrary code. (CVE-2018-20976)\n\nIt was discovered that the RSI 91x Wi-Fi driver in the Linux kernel did not\ndid not handle detach operations correctly, leading to a use-after-free\nvulnerability. A physically proximate attacker could use this to cause a\ndenial of service (system crash) or possibly execute arbitrary code. \n(CVE-2018-21008)\n\nIt was discovered that the Intel Wi-Fi device driver in the Linux kernel\ndid not properly validate certain Tunneled Direct Link Setup (TDLS). A\nphysically proximate attacker could use this to cause a denial of service\n(Wi-Fi disconnect). (CVE-2019-0136)\n\nIt was discovered that the Linux kernel on ARM processors allowed a tracing\nprocess to modify a syscall after a seccomp decision had been made on that\nsyscall. A local attacker could possibly use this to bypass seccomp\nrestrictions. (CVE-2019-2054)\n\nIt was discovered that an integer overflow existed in the Binder\nimplementation of the Linux kernel, leading to a buffer overflow. A local\nattacker could use this to escalate privileges. (CVE-2019-2181)\n\nIt was discovered that the Marvell Wireless LAN device driver in the Linux\nkernel did not properly validate the BSS descriptor. A local attacker could\npossibly use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2019-3846)\n\nIt was discovered that a heap buffer overflow existed in the Marvell\nWireless LAN device driver for the Linux kernel. An attacker could use this\nto cause a denial of service (system crash) or possibly execute arbitrary\ncode. (CVE-2019-10126)\n\nIt was discovered that the Bluetooth UART implementation in the Linux\nkernel did not properly check for missing tty operations. 
A local attacker\ncould use this to cause a denial of service. (CVE-2019-10207)\n\nJonathan Looney discovered that an integer overflow existed in the Linux\nkernel when handling TCP Selective Acknowledgments (SACKs). A remote\nattacker could use this to cause a denial of service (system crash). \n(CVE-2019-11477)\n\nJonathan Looney discovered that the TCP retransmission queue implementation\nin the Linux kernel could be fragmented when handling certain TCP Selective\nAcknowledgment (SACK) sequences. A remote attacker could use this to cause\na denial of service. (CVE-2019-11478)\n\nIt was discovered that the ext4 file system implementation in the Linux\nkernel did not properly zero out memory in some situations. A local\nattacker could use this to expose sensitive information (kernel memory). \n(CVE-2019-11833)\n\nIt was discovered that the PowerPC dlpar implementation in the Linux kernel\ndid not properly check for allocation errors in some situations. A local\nattacker could possibly use this to cause a denial of service (system\ncrash). (CVE-2019-12614)\n\nIt was discovered that the floppy driver in the Linux kernel did not\nproperly validate meta data, leading to a buffer overread. A local attacker\ncould use this to cause a denial of service (system crash). \n(CVE-2019-14283)\n\nIt was discovered that the floppy driver in the Linux kernel did not\nproperly validate ioctl() calls, leading to a division-by-zero. A local\nattacker could use this to cause a denial of service (system crash). \n(CVE-2019-14284)\n\nWen Huang discovered that the Marvell Wi-Fi device driver in the Linux\nkernel did not properly perform bounds checking, leading to a heap\noverflow. A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. (CVE-2019-14814)\n\nWen Huang discovered that the Marvell Wi-Fi device driver in the Linux\nkernel did not properly perform bounds checking, leading to a heap\noverflow. 
A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. (CVE-2019-14815)\n\nWen Huang discovered that the Marvell Wi-Fi device driver in the Linux\nkernel did not properly perform bounds checking, leading to a heap\noverflow. A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. (CVE-2019-14816)\n\nMatt Delco discovered that the KVM hypervisor implementation in the Linux\nkernel did not properly perform bounds checking when handling coalesced\nMMIO write operations. A local attacker with write access to /dev/kvm could\nuse this to cause a denial of service (system crash). (CVE-2019-14821)\n\nPeter Pi discovered a buffer overflow in the virtio network backend\n(vhost_net) implementation in the Linux kernel. (CVE-2019-14835)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your livepatches to the following\nversions:\n\n| Kernel | Version | flavors |\n|--------------------------+----------+--------------------------|\n| 4.4.0-148.174 | 58.1 | lowlatency, generic |\n| 4.4.0-148.174~14.04.1 | 58.1 | lowlatency, generic |\n| 4.4.0-150.176 | 58.1 | generic, lowlatency |\n| 4.4.0-150.176~14.04.1 | 58.1 | lowlatency, generic |\n| 4.4.0-151.178 | 58.1 | lowlatency, generic |\n| 4.4.0-151.178~14.04.1 | 58.1 | generic, lowlatency |\n| 4.4.0-154.181 | 58.1 | lowlatency, generic |\n| 4.4.0-154.181~14.04.1 | 58.1 | generic, lowlatency |\n| 4.4.0-157.185 | 58.1 | lowlatency, generic |\n| 4.4.0-157.185~14.04.1 | 58.1 | generic, lowlatency |\n| 4.4.0-159.187 | 58.1 | lowlatency, generic |\n| 4.4.0-159.187~14.04.1 | 58.1 | generic, lowlatency |\n| 4.4.0-161.189 | 58.1 | lowlatency, generic |\n| 4.4.0-161.189~14.04.1 | 58.1 | lowlatency, generic |\n| 4.4.0-164.192 | 58.1 | lowlatency, generic |\n| 4.4.0-164.192~14.04.1 | 58.1 | lowlatency, generic |\n| 4.4.0-165.193 | 58.1 | generic, lowlatency |\n| 4.4.0-1083.93 | 58.1 | aws |\n| 4.4.0-1084.94 | 58.1 
| aws |\n| 4.4.0-1085.96 | 58.1 | aws |\n| 4.4.0-1087.98 | 58.1 | aws |\n| 4.4.0-1088.99 | 58.1 | aws |\n| 4.4.0-1090.101 | 58.1 | aws |\n| 4.4.0-1092.103 | 58.1 | aws |\n| 4.4.0-1094.105 | 58.1 | aws |\n| 4.15.0-50.54 | 58.1 | generic, lowlatency |\n| 4.15.0-50.54~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-51.55 | 58.1 | generic, lowlatency |\n| 4.15.0-51.55~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-52.56 | 58.1 | lowlatency, generic |\n| 4.15.0-52.56~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-54.58 | 58.1 | generic, lowlatency |\n| 4.15.0-54.58~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-55.60 | 58.1 | generic, lowlatency |\n| 4.15.0-58.64 | 58.1 | generic, lowlatency |\n| 4.15.0-58.64~16.04.1 | 58.1 | lowlatency, generic |\n| 4.15.0-60.67 | 58.1 | lowlatency, generic |\n| 4.15.0-60.67~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-62.69 | 58.1 | generic, lowlatency |\n| 4.15.0-62.69~16.04.1 | 58.1 | lowlatency, generic |\n| 4.15.0-64.73 | 58.1 | generic, lowlatency |\n| 4.15.0-64.73~16.04.1 | 58.1 | lowlatency, generic |\n| 4.15.0-65.74 | 58.1 | lowlatency, generic |\n| 4.15.0-1038.43 | 58.1 | oem |\n| 4.15.0-1039.41 | 58.1 | aws |\n| 4.15.0-1039.44 | 58.1 | oem |\n| 4.15.0-1040.42 | 58.1 | aws |\n| 4.15.0-1041.43 | 58.1 | aws |\n| 4.15.0-1043.45 | 58.1 | aws |\n| 4.15.0-1043.48 | 58.1 | oem |\n| 4.15.0-1044.46 | 58.1 | aws |\n| 4.15.0-1045.47 | 58.1 | aws |\n| 4.15.0-1045.50 | 58.1 | oem |\n| 4.15.0-1047.49 | 58.1 | aws |\n| 4.15.0-1047.51 | 58.1 | azure |\n| 4.15.0-1048.50 | 58.1 | aws |\n| 4.15.0-1049.54 | 58.1 | azure |\n| 4.15.0-1050.52 | 58.1 | aws |\n| 4.15.0-1050.55 | 58.1 | azure |\n| 4.15.0-1050.57 | 58.1 | oem |\n| 4.15.0-1051.53 | 58.1 | aws |\n| 4.15.0-1051.56 | 58.1 | azure |\n| 4.15.0-1052.57 | 58.1 | azure |\n| 4.15.0-1055.60 | 58.1 | azure |\n| 4.15.0-1056.61 | 58.1 | azure |\n| 4.15.0-1056.65 | 58.1 | oem |\n| 4.15.0-1057.62 | 58.1 | azure |\n| 4.15.0-1057.66 | 58.1 | oem |\n| 4.15.0-1059.64 | 58.1 | azure |\n| 
5.0.0-1014.14~18.04.1 | 58.1 | azure |\n| 5.0.0-1016.17~18.04.1 | 58.1 | azure |\n| 5.0.0-1018.19~18.04.1 | 58.1 | azure |\n| 5.0.0-1020.21~18.04.1 | 58.1 | azure |\n\nSupport Information:\n\nKernels older than the levels listed below do not receive livepatch\nupdates. Please upgrade your kernel as soon as possible. \n\n| Series | Version | Flavors |\n|------------------+------------------+--------------------------|\n| Ubuntu 18.04 LTS | 4.15.0-1039 | aws |\n| Ubuntu 16.04 LTS | 4.4.0-1083 | aws |\n| Ubuntu 18.04 LTS | 5.0.0-1000 | azure |\n| Ubuntu 16.04 LTS | 4.15.0-1047 | azure |\n| Ubuntu 18.04 LTS | 4.15.0-50 | generic lowlatency |\n| Ubuntu 16.04 LTS | 4.15.0-50 | generic lowlatency |\n| Ubuntu 14.04 LTS | 4.4.0-148 | generic lowlatency |\n| Ubuntu 18.04 LTS | 4.15.0-1038 | oem |\n| Ubuntu 16.04 LTS | 4.4.0-148 | generic lowlatency |\n\nReferences:\n CVE-2016-10905, CVE-2018-20856, CVE-2018-20961, CVE-2018-20976, \n CVE-2018-21008, CVE-2019-0136, CVE-2019-2054, CVE-2019-2181, \n CVE-2019-3846, CVE-2019-10126, CVE-2019-10207, CVE-2019-11477, \n CVE-2019-11478, CVE-2019-11833, CVE-2019-12614, CVE-2019-14283, \n CVE-2019-14284, CVE-2019-14814, CVE-2019-14815, CVE-2019-14816, \n CVE-2019-14821, CVE-2019-14835\n\n\n-- \nubuntu-security-announce mailing list\nubuntu-security-announce@lists.ubuntu.com\nModify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce\n. \n\n\nHere are the details from the Slackware 14.2 ChangeLog:\n+--------------------------+\npatches/packages/linux-4.4.199/*: Upgraded. \n These updates fix various bugs and security issues. \n If you use lilo to boot your machine, be sure lilo.conf points to the correct\n kernel and initrd and run lilo as root to update the bootloader. \n If you use elilo to boot your machine, you should run eliloconfig to copy the\n kernel and initrd to the EFI System Partition. 
\n For more information, see:\n Fixed in 4.4.191:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3900\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15118\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10906\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10905\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10638\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15117\n Fixed in 4.4.193:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14835\n Fixed in 4.4.194:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14816\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14814\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15505\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14821\n Fixed in 4.4.195:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17053\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17052\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17056\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17055\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17054\n Fixed in 4.4.196:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2215\n Fixed in 4.4.197:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16746\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20976\n Fixed in 4.4.198:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17075\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17133\n Fixed in 4.4.199:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15098\n (* Security fix *)\n+--------------------------+\n\n\nWhere to find the new packages:\n+-----------------------------+\n\nThanks to the friendly folks at the OSU Open Source Lab\n(http://osuosl.org) for donating FTP and rsync hosting\nto the Slackware project! :-)\n\nAlso see the \"Get Slack\" section on http://slackware.com for\nadditional mirror sites near you. 
\n\nUpdated packages for Slackware 14.2:\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-generic-4.4.199-i586-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-generic-smp-4.4.199_smp-i686-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-headers-4.4.199_smp-x86-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-huge-4.4.199-i586-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-huge-smp-4.4.199_smp-i686-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-modules-4.4.199-i586-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-modules-smp-4.4.199_smp-i686-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-source-4.4.199_smp-noarch-1.txz\n\nUpdated packages for Slackware x86_64 14.2:\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-generic-4.4.199-x86_64-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-headers-4.4.199-x86-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-huge-4.4.199-x86_64-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-modules-4.4.199-x86_64-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-source-4.4.199-noarch-1.txz\n\n\nMD5 signatures:\n+-------------+\n\nSlackware 14.2 packages:\n\n0e523f42e759ecc2399f36e37672f110 kernel-generic-4.4.199-i586-1.txz\nee6451f5362008b46fee2e08e3077b21 kernel-generic-smp-4.4.199_smp-i686-1.txz\na8338ef88f2e3ea9c74d564c36ccd420 kernel-headers-4.4.199_smp-x86-1.txz\ncd9e9c241e4eec2fba1dae658a28870e 
kernel-huge-4.4.199-i586-1.txz\n842030890a424023817d42a83a86a7f4 kernel-huge-smp-4.4.199_smp-i686-1.txz\n257db024bb4501548ac9118dbd2d9ae6 kernel-modules-4.4.199-i586-1.txz\n96377cbaf7bca55aaca70358c63151a7 kernel-modules-smp-4.4.199_smp-i686-1.txz\n0673e86466f9e624964d95107cf6712f kernel-source-4.4.199_smp-noarch-1.txz\n\nSlackware x86_64 14.2 packages:\n6d1ff428e7cad6caa8860acc402447a1 kernel-generic-4.4.199-x86_64-1.txz\ndadc091dc725b8227e0d1e35098d6416 kernel-headers-4.4.199-x86-1.txz\nf5f4c034203f44dd1513ad3504c42515 kernel-huge-4.4.199-x86_64-1.txz\na5337cd8b2ca80d4d93b9e9688e42b03 kernel-modules-4.4.199-x86_64-1.txz\n5dd6e46c04f37b97062dc9e52cc38add kernel-source-4.4.199-noarch-1.txz\n\n\nInstallation instructions:\n+------------------------+\n\nUpgrade the packages as root:\n# upgradepkg kernel-*.txz\n\nIf you are using an initrd, you\u0027ll need to rebuild it. \n\nFor a 32-bit SMP machine, use this command (substitute the appropriate\nkernel version if you are not running Slackware 14.2):\n# /usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.4.199-smp | bash\n\nFor a 64-bit machine, or a 32-bit uniprocessor machine, use this command\n(substitute the appropriate kernel version if you are not running\nSlackware 14.2):\n# /usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.4.199 | bash\n\nPlease note that \"uniprocessor\" has to do with the kernel you are running,\nnot with the CPU. Most systems should run the SMP kernel (if they can)\nregardless of the number of cores the CPU has. If you aren\u0027t sure which\nkernel you are running, run \"uname -a\". If you see SMP there, you are\nrunning the SMP kernel and should use the 4.4.199-smp version when running\nmkinitrd_command_generator. Note that this is only for 32-bit -- 64-bit\nsystems should always use 4.4.199 as the version. \n\nIf you are using lilo or elilo to boot the machine, you\u0027ll need to ensure\nthat the machine is properly prepared before rebooting. 
\n\nIf using LILO:\nBy default, lilo.conf contains an image= line that references a symlink\nthat always points to the correct kernel. No editing should be required\nunless your machine uses a custom lilo.conf. If that is the case, be sure\nthat the image= line references the correct kernel file. Either way,\nyou\u0027ll need to run \"lilo\" as root to reinstall the boot loader. \n\nIf using elilo:\nEnsure that the /boot/vmlinuz symlink is pointing to the kernel you wish\nto use, and then run eliloconfig to update the EFI System Partition. \n\n\n+-----+\n\nSlackware Linux Security Team\nhttp://slackware.com/gpg-key\nsecurity@slackware.com\n\n+------------------------------------------------------------------------+\n| To leave the slackware-security mailing list: |\n+------------------------------------------------------------------------+\n| Send an email to majordomo@slackware.com with this text in the body of |\n| the email message: |\n| |\n| unsubscribe slackware-security |\n| |\n| You will get a confirmation message back containing instructions to |\n| complete the process. Please do not reply to this email address. 7) - aarch64, noarch, ppc64le\n\n3. These packages include redhat-release-virtualization-host,\novirt-node, and rhev-hypervisor. RHVH features a Cockpit user\ninterface for monitoring the host\u0027s resources and performing administrative\ntasks. \n\nThe following packages have been upgraded to a later upstream version:\nredhat-release-virtualization-host (4.2), redhat-virtualization-host (4.2). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: kernel security update\nAdvisory ID: RHSA-2019:2867-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2019:2867\nIssue date: 2019-09-23\nCVE Names: CVE-2019-14835\n====================================================================\n1. 
Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 7.4\nAdvanced Update Support, Red Hat Enterprise Linux 7.4 Telco Extended Update\nSupport, and Red Hat Enterprise Linux 7.4 Update Services for SAP\nSolutions. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Server AUS (v. 7.4) - noarch, x86_64\nRed Hat Enterprise Linux Server E4S (v. 7.4) - noarch, ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional AUS (v. 7.4) - x86_64\nRed Hat Enterprise Linux Server Optional E4S (v. 7.4) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional TUS (v. 7.4) - x86_64\nRed Hat Enterprise Linux Server TUS (v. 7.4) - noarch, x86_64\n\n3. \n(CVE-2019-14835)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1750727 - CVE-2019-14835 kernel: vhost-net: guest to host kernel escape during migration\n\n6. Package List:\n\nRed Hat Enterprise Linux Server AUS (v. 
7.4):\n\nSource:\nkernel-3.10.0-693.59.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-693.59.1.el7.noarch.rpm\nkernel-doc-3.10.0-693.59.1.el7.noarch.rpm\n\nx86_64:\nkernel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debug-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-devel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-headers-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-693.59.1.el7.x86_64.rpm\nperf-3.10.0-693.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server E4S (v. 7.4):\n\nSource:\nkernel-3.10.0-693.59.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-693.59.1.el7.noarch.rpm\nkernel-doc-3.10.0-693.59.1.el7.noarch.rpm\n\nppc64le:\nkernel-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-bootwrapper-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-debug-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-devel-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-headers-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-tools-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-tools-libs-3.10.0-693.59.1.el7.ppc64le.rpm\nperf-3.10.0-693.59.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\npython-perf-3.10.0-693.59.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\n\nx86_64:\nkernel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debug-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_6
4.rpm\nkernel-debug-devel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-devel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-headers-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-693.59.1.el7.x86_64.rpm\nperf-3.10.0-693.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server TUS (v. 7.4):\n\nSource:\nkernel-3.10.0-693.59.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-693.59.1.el7.noarch.rpm\nkernel-doc-3.10.0-693.59.1.el7.noarch.rpm\n\nx86_64:\nkernel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debug-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-devel-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-headers-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-693.59.1.el7.x86_64.rpm\nperf-3.10.0-693.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional AUS (v. 
7.4):\n\nx86_64:\nkernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-693.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional E4S (v. 7.4):\n\nppc64le:\nkernel-debug-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-debug-devel-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\nkernel-tools-libs-devel-3.10.0-693.59.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-693.59.1.el7.ppc64le.rpm\n\nx86_64:\nkernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-693.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional TUS (v. 7.4):\n\nx86_64:\nkernel-debug-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-693.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-693.59.1.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-14835\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/security/vulnerabilities/kernel-vhost\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2019 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBXYi86tzjgjWX9erEAQi1YA//SFbLNTK8xgMJM4UXWUDM7d1LMS/f79kZ\nC+qG3/+OUqyltICprPUpLQOslAaLgY1C1slkqkxXXgNtfB4rBnE2gkDtSnqe0a7K\n60JapVVCpXuJnGozoFusUPgSvy7DM3RJ+xcz4SS6fNIuRGq1bDjOd4E3tcQwk1rh\nhqqPwPmY1uJK2y6UWiyF9Q2Dvug5mO2ZKqxgwuZPQyRXg14BVxDfXIY6+8SuuL3j\nYLVmCNqpErcoqeIaUAXUbGyATrUwni8J1RY4Q5lNDMq8u31Xlu/CSaEqU5OxtkeB\nRvfAuFebnJkUo+YkzouWWKF+eTEhpqxXnPZq2KM8zStGmRpVmreg16THa6CSpi81\nHDcsTNKFHsA3QWHosxramg1Z/RYSD+goD0lvnfEVvO4jaUW2f59q2PEu4riM897f\nL21luddDijVBYmzAngkrhKR0kiADLmU+p67nZ9EyNrR95XYaaJ6GRb5vSXBEgiXt\nvJJn8GuJAI5h13MzO5rAua82xTHxiQc28s5KCbCJFcp3DZo72mDIVE6MfGb1By5e\nT1IN17xPN7yVoy/c8WZ89unoNZNXMsTBhcWNNlXsXknALVaYi5i4AtWx3mOLP2gk\nc1Z7F9XG8niCTGSLvBs5gSAfjSFNB6YiU6NzX9Zlxb4/hr3HE20S+w3AGnvD2f05\nhz6LDDCc4xc=hk2d\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements",
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14835"
},
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154951"
},
{
"db": "PACKETSTORM",
"id": "155212"
},
{
"db": "PACKETSTORM",
"id": "154562"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154572"
},
{
"db": "PACKETSTORM",
"id": "154570"
},
{
"db": "PACKETSTORM",
"id": "154540"
}
],
"trust": 1.8
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2019-14835",
"trust": 2.0
},
{
"db": "PACKETSTORM",
"id": "155212",
"trust": 1.2
},
{
"db": "PACKETSTORM",
"id": "154951",
"trust": 1.2
},
{
"db": "PACKETSTORM",
"id": "154572",
"trust": 1.2
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/10/03/1",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/10/09/7",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/09/24/1",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/10/09/3",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/09/17/1",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "154570",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154602",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154562",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154514",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154540",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154659",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154539",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154538",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154513",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154566",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154563",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154564",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154565",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154541",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154558",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154585",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154569",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-201909-807",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-146821",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154951"
},
{
"db": "PACKETSTORM",
"id": "155212"
},
{
"db": "PACKETSTORM",
"id": "154562"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154572"
},
{
"db": "PACKETSTORM",
"id": "154570"
},
{
"db": "PACKETSTORM",
"id": "154540"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"id": "VAR-201909-0695",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
}
],
"trust": 0.35113123999999996
},
"last_update_date": "2024-07-23T20:01:54.800000Z",
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-120",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2830"
},
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2862"
},
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2867"
},
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2901"
},
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2924"
},
{
"trust": 1.1,
"url": "https://seclists.org/bugtraq/2019/sep/41"
},
{
"trust": 1.1,
"url": "https://seclists.org/bugtraq/2019/nov/11"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2019/dsa-4531"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/yw3qnmpenpfegvtofpsnobl7jeijs25p/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/kqfy6jyfiq2vfq7qcsxpwtul5zdncjl5/"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhba-2019:2824"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2827"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2828"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2829"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2854"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2863"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2864"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2865"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2866"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2869"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2889"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2899"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2900"
},
{
"trust": 1.1,
"url": "https://usn.ubuntu.com/4135-1/"
},
{
"trust": 1.1,
"url": "https://usn.ubuntu.com/4135-2/"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2019/09/msg00025.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2019/10/msg00000.html"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2019/09/24/1"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2019/10/03/1"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2019/10/09/3"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2019/10/09/7"
},
{
"trust": 1.1,
"url": "http://packetstormsecurity.com/files/154572/kernel-live-patch-security-notice-lsn-0056-1.html"
},
{
"trust": 1.1,
"url": "http://packetstormsecurity.com/files/154951/kernel-live-patch-security-notice-lsn-0058-1.html"
},
{
"trust": 1.1,
"url": "http://packetstormsecurity.com/files/155212/slackware-security-advisory-slackware-14.2-kernel-updates.html"
},
{
"trust": 1.1,
"url": "http://www.huawei.com/en/psirt/security-advisories/huawei-sa-20200115-01-qemu-en"
},
{
"trust": 1.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=cve-2019-14835"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20191031-0005/"
},
{
"trust": 1.1,
"url": "https://www.openwall.com/lists/oss-security/2019/09/17/1"
},
{
"trust": 1.1,
"url": "http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00064.html"
},
{
"trust": 1.1,
"url": "http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00066.html"
},
{
"trust": 0.9,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14835"
},
{
"trust": 0.5,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/vulnerabilities/kernel-vhost"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2019-14835"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14821"
},
{
"trust": 0.2,
"url": "https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10905"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14816"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20976"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14814"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe/5.0.0-29.31~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/4.4.0-1122.131"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.0.0-1017.18"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15030"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/4.15.0-1059.64"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/4.15.0-1044.46"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.0.0-1017.17"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/5.0.0-1017.17"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.0.0-29.31"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.4.0-1094.105"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-hwe/4.15.0-1050.52~16.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke-4.15/4.15.0-1044.46"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/4.15.0-1047.51"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.15.0-1050.52"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.15.0-64.73"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1025.28~16.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.0.0-1020.21~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/5.0.0-1021.22"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.0.0-1016.18"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/4.4.0-1058.65"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.4.0-1126.132"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem/4.15.0-1056.65"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/4.15.0-1046.46"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke-5.0/5.0.0-1017.17~18.04.1"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4135-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1025.28"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/4.15.0-1044.70"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.4.0-164.192"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.15.0-1064.71"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15031"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.0.0-1020.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe/4.15.0-64.73~16.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14815"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20856"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11478"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2181"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11477"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3846"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12614"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-21008"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10126"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14284"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14283"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11833"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2054"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-0136"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20961"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14835"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-2215"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17054"
},
{
"trust": 0.1,
"url": "http://slackware.com"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16746"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17055"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17075"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15118"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17053"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2016-10906"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10906"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2018-20976"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17052"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-15117"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17133"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14816"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15505"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-15098"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-16746"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17054"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2215"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-15118"
},
{
"trust": 0.1,
"url": "http://slackware.com/gpg-key"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2016-10905"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17056"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-3900"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15117"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17056"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14821"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-10638"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15098"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17075"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17053"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3900"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10638"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17055"
},
{
"trust": 0.1,
"url": "http://osuosl.org)"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14814"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17133"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-15505"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17052"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154951"
},
{
"db": "PACKETSTORM",
"id": "155212"
},
{
"db": "PACKETSTORM",
"id": "154562"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154572"
},
{
"db": "PACKETSTORM",
"id": "154570"
},
{
"db": "PACKETSTORM",
"id": "154540"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154951"
},
{
"db": "PACKETSTORM",
"id": "155212"
},
{
"db": "PACKETSTORM",
"id": "154562"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154572"
},
{
"db": "PACKETSTORM",
"id": "154570"
},
{
"db": "PACKETSTORM",
"id": "154540"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2019-09-17T00:00:00",
"db": "VULHUB",
"id": "VHN-146821"
},
{
"date": "2019-09-25T17:55:27",
"db": "PACKETSTORM",
"id": "154602"
},
{
"date": "2019-09-18T21:22:40",
"db": "PACKETSTORM",
"id": "154514"
},
{
"date": "2019-10-23T18:32:10",
"db": "PACKETSTORM",
"id": "154951"
},
{
"date": "2019-11-08T15:37:19",
"db": "PACKETSTORM",
"id": "155212"
},
{
"date": "2019-09-23T18:25:39",
"db": "PACKETSTORM",
"id": "154562"
},
{
"date": "2019-09-30T04:44:44",
"db": "PACKETSTORM",
"id": "154659"
},
{
"date": "2019-09-23T18:31:46",
"db": "PACKETSTORM",
"id": "154572"
},
{
"date": "2019-09-23T18:27:09",
"db": "PACKETSTORM",
"id": "154570"
},
{
"date": "2019-09-20T14:57:53",
"db": "PACKETSTORM",
"id": "154540"
},
{
"date": "2019-09-17T16:15:10.980000",
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-02-12T00:00:00",
"db": "VULHUB",
"id": "VHN-146821"
},
{
"date": "2023-12-15T15:29:09.587000",
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154951"
}
],
"trust": 0.2
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat Security Advisory 2019-2901-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "154602"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "overflow",
"sources": [
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154562"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154570"
},
{
"db": "PACKETSTORM",
"id": "154540"
}
],
"trust": 0.5
}
}
VAR-202201-0496
Vulnerability from variot - Updated: 2024-07-23 19:59An unprivileged write to the file handler flaw was found in the Linux kernel's control groups and namespaces subsystem, in the way users can access a less privileged process that is controlled by cgroups and has a higher privileged parent process. The flaw affects both the cgroup v1 and cgroup v2 versions of control groups. A local user could use this flaw to crash the system or escalate their privileges on the system. The Linux Kernel contains an authentication vulnerability; information may be obtained, information may be tampered with, and service operation may be interrupted (DoS). Attackers can use this vulnerability to bypass the restrictions of the Linux kernel through cgroup fd writing to elevate their privileges. ========================================================================== Ubuntu Security Notice USN-5368-1 April 06, 2022
linux-azure-5.13, linux-oracle-5.13 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
Summary:
Several security issues were fixed in the Linux kernel.
Software Description: - linux-azure-5.13: Linux kernel for Microsoft Azure cloud systems - linux-oracle-5.13: Linux kernel for Oracle Cloud systems
Details:
It was discovered that the BPF verifier in the Linux kernel did not properly restrict pointer types in certain situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2022-1055)
Yiqi Sun and Kevin Wang discovered that the cgroups implementation in the Linux kernel did not properly restrict access to the cgroups v1 release_agent feature. (CVE-2022-0492)
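The cgroup-v1 release_agent file mentioned above is the knob abused in CVE-2022-0492, and a quick way to see whether a host even exposes writable release_agent files is to scan the cgroup mount root. A minimal sketch follows; it is demonstrated against a mock directory tree so it runs anywhere, and the `check_release_agent` helper name and mock layout are illustrative, not part of the advisory:

```shell
#!/bin/sh
# Sketch: look for writable cgroup-v1 release_agent files, the mechanism
# abused in CVE-2022-0492. check_release_agent takes a cgroup mount root
# (normally /sys/fs/cgroup); here it is run against a mock tree so the
# script is self-contained. Helper name and layout are illustrative.
check_release_agent() {
    base="$1"; found=0
    for f in "$base"/*/release_agent; do
        [ -w "$f" ] && { echo "writable: $f"; found=1; }
    done
    return "$found"
}

mock=$(mktemp -d)                   # stand-in for /sys/fs/cgroup
mkdir -p "$mock/memory"
touch "$mock/memory/release_agent"  # simulate a writable release_agent file
check_release_agent "$mock" && echo "no writable release_agent files"
rm -r "$mock"
```

On a patched or cgroup-v2-only host the scan typically finds nothing; a writable release_agent reachable from an unprivileged namespace is what made the escalation possible.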
Jürgen Groß discovered that the Xen subsystem within the Linux kernel did not adequately limit the number of events driver domains (unprivileged PV backends) could send to other guest VMs. An attacker in a driver domain could use this to cause a denial of service in other guest VMs. (CVE-2021-28711, CVE-2021-28712, CVE-2021-28713)
Jürgen Groß discovered that the Xen network backend driver in the Linux kernel did not adequately limit the amount of queued packets when a guest did not process them. An attacker in a guest VM can use this to cause a denial of service (excessive kernel memory consumption) in the network backend domain. (CVE-2021-28714, CVE-2021-28715)
Szymon Heidrich discovered that the USB Gadget subsystem in the Linux kernel did not properly restrict the size of control requests for certain gadget types, leading to possible out of bounds reads or writes. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-39698)
It was discovered that the simulated networking device driver for the Linux kernel did not properly initialize memory in certain situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2021-4135)
Eric Biederman discovered that the cgroup process migration implementation in the Linux kernel did not perform permission checks correctly in some situations. (CVE-2021-4197)
Brendan Dolan-Gavitt discovered that the aQuantia AQtion Ethernet device driver in the Linux kernel did not properly validate meta-data coming from the device. A local attacker who can control an emulated device can use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-43975)
It was discovered that the ARM Trusted Execution Environment (TEE) subsystem in the Linux kernel contained a race condition leading to a use- after-free vulnerability. A local attacker could use this to cause a denial of service or possibly execute arbitrary code. (CVE-2021-44733)
It was discovered that the Phone Network protocol (PhoNet) implementation in the Linux kernel did not properly perform reference counting in some error conditions. A local attacker could possibly use this to cause a denial of service (memory exhaustion). (CVE-2021-45095)
It was discovered that the eBPF verifier in the Linux kernel did not properly perform bounds checking on mov32 operations. A local attacker could use this to expose sensitive information (kernel pointer addresses). (CVE-2021-45402)
It was discovered that the Reliable Datagram Sockets (RDS) protocol implementation in the Linux kernel did not properly deallocate memory in some error conditions. A local attacker could possibly use this to cause a denial of service (memory exhaustion). (CVE-2021-45480)
It was discovered that the BPF subsystem in the Linux kernel did not properly track pointer types on atomic fetch operations in some situations. A local attacker could use this to expose sensitive information (kernel pointer addresses). (CVE-2022-0264)
It was discovered that the TIPC Protocol implementation in the Linux kernel did not properly initialize memory in some situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2022-0382)
Samuel Page discovered that the Transparent Inter-Process Communication (TIPC) protocol implementation in the Linux kernel contained a stack-based buffer overflow. A remote attacker could use this to cause a denial of service (system crash) for systems that have a TIPC bearer configured. (CVE-2022-0435)
It was discovered that the KVM implementation for s390 systems in the Linux kernel did not properly prevent memory operations on PVM guests that were in non-protected mode. A local attacker could use this to obtain unauthorized memory write access. (CVE-2022-0516)
It was discovered that the ICMPv6 implementation in the Linux kernel did not properly deallocate memory in certain situations. A remote attacker could possibly use this to cause a denial of service (memory exhaustion). (CVE-2022-0742)
It was discovered that the IPsec implementation in the Linux kernel did not properly allocate enough memory when performing ESP transformations, leading to a heap-based buffer overflow. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2022-27666)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS: linux-image-5.13.0-1021-azure 5.13.0-1021.24~20.04.1 linux-image-5.13.0-1025-oracle 5.13.0-1025.30~20.04.1 linux-image-azure 5.13.0.1021.24~20.04.10 linux-image-oracle 5.13.0.1025.30~20.04.1
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
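Before recompiling third-party modules, it can help to confirm whether the installed kernel package already meets the fixed version from this notice. A minimal sketch of such a comparison using GNU `sort -V` is shown below; the `CURRENT` value is a hypothetical placeholder (on a real Ubuntu host it would come from `dpkg-query -W -f='${Version}' linux-image-azure`), and the version strings are simplified forms of the ones in this advisory:

```shell
#!/bin/sh
# Sketch: compare an installed package version against the fixed version
# from USN-5368-1 using GNU sort -V. CURRENT is a hypothetical placeholder;
# on a real host obtain it with:
#   dpkg-query -W -f='${Version}' linux-image-azure
FIXED="5.13.0-1021.24"
CURRENT="5.13.0-1019.21"   # assumed older, still-vulnerable version
lowest=$(printf '%s\n' "$CURRENT" "$FIXED" | sort -V | head -n1)
if [ "$lowest" = "$CURRENT" ] && [ "$CURRENT" != "$FIXED" ]; then
    status="needs-update"
else
    status="up-to-date"
fi
echo "linux-image-azure: $status"
```

`sort -V` orders dotted version strings numerically component by component, which is why it is a reasonable stand-in here for a full dpkg-style comparison.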
References: https://ubuntu.com/security/notices/USN-5368-1 CVE-2021-28711, CVE-2021-28712, CVE-2021-28713, CVE-2021-28714, CVE-2021-28715, CVE-2021-39685, CVE-2021-39698, CVE-2021-4135, CVE-2021-4197, CVE-2021-43975, CVE-2021-44733, CVE-2021-45095, CVE-2021-45402, CVE-2021-45480, CVE-2022-0264, CVE-2022-0382, CVE-2022-0435, CVE-2022-0492, CVE-2022-0516, CVE-2022-0742, CVE-2022-1055, CVE-2022-23222, CVE-2022-27666
Package Information: https://launchpad.net/ubuntu/+source/linux-azure-5.13/5.13.0-1021.24~20.04.1 https://launchpad.net/ubuntu/+source/linux-oracle-5.13/5.13.0-1025.30~20.04.1
. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.5 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
Security fixes:
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- nconf: Prototype pollution in memory store (CVE-2022-21803)
- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- dset: Prototype Pollution in dset (CVE-2022-25645)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
Bug fixes:
- Trying to create a new cluster on vSphere and no feedback, stuck in "creating" (BZ# 1937078)
- Wrong message is displayed when GRC fails to connect to an Ansible Tower (BZ# 2051752)
- multicluster_operators_hub_subscription issues due to /tmp usage (BZ# 2052702)
- Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field (BZ# 2054954)
- Changing the multiclusterhub name other than the default name keeps the version in the web console loading (BZ# 2059822)
- search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade (BZ# 2065318)
- Uninstall pod crashed when destroying Azure Gov cluster in ACM (BZ# 2073562)
- Deprovisioned clusters not filtered out by discovery controller (BZ# 2075594)
- When deleting a secret for a Helm application, duplicate errors show up in topology (BZ# 2075675)
- Changing existing placement rules does not change YAML file Regression (BZ# 2075724)
- Editing Helm Argo Applications does not Prune Old Resources (BZ# 2079906)
- Failed to delete the requested resource [404] error appears after subscription is deleted and its placement rule is used in the second subscription (BZ# 2080713)
- Typo in the logs when Deployable is updated in the subscription namespace (BZ# 2080960)
- After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters (BZ# 2080716)
- RHACM 2.4.5 images (BZ# 2081438)
- Performance issue to get secret in claim-controller (BZ# 2081908)
- Failed to provision openshift 4.10 on bare metal (BZ# 2094109)
- Bugs fixed (https://bugzilla.redhat.com/):
1937078 - Trying to create a new cluster on vSphere and no feedback, stuck in "creating" 2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2051752 - Wrong message is displayed when GRC fails to connect to an ansible tower 2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account 2052702 - multicluster_operators_hub_subscription issues due to /tmp usage 2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements 2054954 - Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field 2059822 - Changing the multiclusterhub name other than the default name keeps the version in the web console loading. 2065318 - search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade 2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale 2073562 - Uninstall pod crashed when destroying Azure Gov cluster in ACM 2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store 2075594 - Deprovisioned clusters not filtered out by discovery controller 2075675 - When deleting a secret for a Helm application, duplicate errors show up in topology 2075724 - Changing existing placement rules does not change YAML file 2079906 - Editing Helm Argo Applications does not Prune Old Resources 2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses 2080713 - Failed to delete the requested resource [404] error appears after subscription is deleted and it's placement rule is used in the second subscription [Upgrade] 2080716 - After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters 2080847 - CVE-2022-25645 dset: Prototype Pollution in dset 2080960 - Typo in the logs when Deployable is updated in the subscription namespace 2081438 - RHACM 2.4.5 
images 2081908 - Performance issue to get secret in claim-controller 2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group 2094109 - Failed to provision openshift 4.10 on bare metal
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: kernel security and bug fix update Advisory ID: RHSA-2022:5626-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2022:5626 Issue date: 2022-07-19 CVE Names: CVE-2020-29368 CVE-2021-4197 CVE-2021-4203 CVE-2022-1012 CVE-2022-1729 CVE-2022-32250 =====================================================================
- Summary:
An update for kernel is now available for Red Hat Enterprise Linux 8.4 Extended Update Support.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat CodeReady Linux Builder EUS (v. 8.4) - aarch64, ppc64le, x86_64 Red Hat Enterprise Linux BaseOS EUS (v.8.4) - aarch64, noarch, ppc64le, s390x, x86_64
Security Fix(es):
- kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak (CVE-2022-1012)
- kernel: race condition in perf_event_open leads to privilege escalation (CVE-2022-1729)
- kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root (CVE-2022-32250)
- kernel: cgroup: Use open-time creds and namespace for migration perm checks (CVE-2021-4197)
- kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses (CVE-2021-4203)
- kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check (CVE-2020-29368)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
- Failed to reboot after crash trigger (BZ#2060747)
- conntrack entries linger around after test (BZ#2066357)
- Enable nested virtualization (BZ#2079070)
- slub corruption during LPM of hnv interface (BZ#2081251)
- sleeping function called from invalid context at kernel/locking/spinlock_rt.c:35 (BZ#2082091)
- Backport request of "genirq: use rcu in kstat_irqs_usr()" (BZ#2083309)
- ethtool -L may cause system to hang (BZ#2083323)
- For isolated CPUs (with nohz_full enabled for isolated CPUs) CPU utilization statistics are not getting reflected continuously (BZ#2084139)
- Affinity broken due to vector space exhaustion (BZ#2084647)
- kernel memory leak while freeing nested actions (BZ#2086597)
- sync rhel-8.6 with upstream 5.13 through 5.16 fixes and improvements (BZ#2088037)
- Kernel panic possibly when cleaning namespace on pod deletion (BZ#2089539)
- Softirq hrtimers are being placed on the per-CPU softirq clocks on isolcpu's. (BZ#2090485)
- fix missed wake-ups in rq_qos_throttle try two (BZ#2092076)
- NFS4 client experiencing IO outages while sending duplicate SYNs and erroneous RSTs during connection reestablishment (BZ#2094334)
- using __this_cpu_read() in preemptible [00000000] code: kworker/u66:1/937154 (BZ#2095775)
- Need some changes in RHEL8.x kernels. (BZ#2096932)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check 2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks 2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses 2064604 - CVE-2022-1012 kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak 2086753 - CVE-2022-1729 kernel: race condition in perf_event_open leads to privilege escalation 2092427 - CVE-2022-32250 kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root
- Package List:
Red Hat Enterprise Linux BaseOS EUS (v.8.4):
Source: kernel-4.18.0-305.57.1.el8_4.src.rpm
aarch64: bpftool-4.18.0-305.57.1.el8_4.aarch64.rpm bpftool-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-core-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-cross-headers-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-core-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-devel-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-modules-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-modules-extra-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-devel-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-headers-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-modules-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-modules-extra-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-libs-4.18.0-305.57.1.el8_4.aarch64.rpm perf-4.18.0-305.57.1.el8_4.aarch64.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm python3-perf-4.18.0-305.57.1.el8_4.aarch64.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm
noarch: kernel-abi-stablelists-4.18.0-305.57.1.el8_4.noarch.rpm kernel-doc-4.18.0-305.57.1.el8_4.noarch.rpm
ppc64le: bpftool-4.18.0-305.57.1.el8_4.ppc64le.rpm bpftool-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-core-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-cross-headers-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-core-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-modules-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-modules-extra-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-headers-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-modules-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-modules-extra-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-libs-4.18.0-305.57.1.el8_4.ppc64le.rpm perf-4.18.0-305.57.1.el8_4.ppc64le.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm python3-perf-4.18.0-305.57.1.el8_4.ppc64le.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm
s390x: bpftool-4.18.0-305.57.1.el8_4.s390x.rpm bpftool-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-4.18.0-305.57.1.el8_4.s390x.rpm kernel-core-4.18.0-305.57.1.el8_4.s390x.rpm kernel-cross-headers-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-core-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-devel-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-modules-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debuginfo-common-s390x-4.18.0-305.57.1.el8_4.s390x.rpm kernel-devel-4.18.0-305.57.1.el8_4.s390x.rpm kernel-headers-4.18.0-305.57.1.el8_4.s390x.rpm kernel-modules-4.18.0-305.57.1.el8_4.s390x.rpm kernel-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm kernel-tools-4.18.0-305.57.1.el8_4.s390x.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-core-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-devel-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-modules-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm perf-4.18.0-305.57.1.el8_4.s390x.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm python3-perf-4.18.0-305.57.1.el8_4.s390x.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm
x86_64: bpftool-4.18.0-305.57.1.el8_4.x86_64.rpm bpftool-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-core-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-cross-headers-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-core-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-devel-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-modules-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-modules-extra-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-devel-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-headers-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-modules-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-modules-extra-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-libs-4.18.0-305.57.1.el8_4.x86_64.rpm perf-4.18.0-305.57.1.el8_4.x86_64.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm python3-perf-4.18.0-305.57.1.el8_4.x86_64.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm
Red Hat CodeReady Linux Builder EUS (v. 8.4):
aarch64: bpftool-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-libs-devel-4.18.0-305.57.1.el8_4.aarch64.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm
ppc64le: bpftool-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-libs-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm
x86_64: bpftool-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-libs-devel-4.18.0-305.57.1.el8_4.x86_64.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm
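The package lists above all follow the conventional name-version-release.arch.rpm layout (e.g. kernel-4.18.0-305.57.1.el8_4.x86_64.rpm). As a minimal sketch, assuming that layout holds, such a filename can be split into its fields like this (the helper name is mine, not part of any Red Hat tooling):

```python
def parse_rpm_filename(filename: str):
    """Split an RPM filename like 'kernel-4.18.0-305.57.1.el8_4.x86_64.rpm'
    into (name, version, release, arch), assuming the common N-V-R.A.rpm layout."""
    stem = filename[:-len(".rpm")] if filename.endswith(".rpm") else filename
    # The architecture is everything after the last dot.
    stem, _, arch = stem.rpartition(".")
    # Release is after the last hyphen, version after the second-to-last.
    name_version, _, release = stem.rpartition("-")
    name, _, version = name_version.rpartition("-")
    return name, version, release, arch
```

rpartition is used so that hyphens inside the package name itself (e.g. python3-perf, kernel-tools-libs) stay with the name rather than being mistaken for the version separator.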
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYuFkSdzjgjWX9erEAQhDCxAAknsy8K3eg1J603gMndUGWfI/Fs5VzIaH lxGavTw8H57lXRWbQYqJoRKk42uHAH2iCicovyvowJ5SdfnChtAVbG1A1wjJLmJJ 0YDKoeMn3s1jjThivm5rWGQVdImqLw+CxVvb3Pywv6ZswTI5r4ZB4FEXW8GIR1w2 1FeHTcwUgNLzeBLdVem1T50lWERG0j0ZGUmv9mu4QMDeWXoSoPcHKWnsmLgDvQif dVky3UsFoCJ783WJOIctmY97kOffqIDvZdbPwajAyTByspumtcwt6N7wMU6VfI+u B6bRGQgLbElY6IniLUsV7MG8GbbffZvPFNN/n6LdnnFgEt1eDlo6LkZCyPaMbEfx 2dMxJtcAiXmydMs5QXvNJ3y2UR2fp/iHF8euAnSN3eKTLAxDQwo3c4KvNUKAfFcF OAjbyLTilLhiPHRARG4aEWCEUSmfzO3rulNhRcIEWtNIira3/QMFG9qUjNAMvzU1 M4tMSPkH35gx49p2a6arZceUGDXiRwvrP142GzpAgRWt/GrydjAsRiG4pJM2H5TW nB5q7OuwEvch8+o8gJril5uOpm6eI1lylv9wTXbwjpzQqL5k2JcgByRWx8wLqYXy wXsBm+JZL9ztSadqoVsFWSqC0yeRkuF185F4gI7+7azjpeQhHtJEix3bgqRhIzK4 07JERnC1IRg= =y1sh -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Summary:
The Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2038898 - [UI] "Update Repository" option not getting disabled after adding the Replication Repository details to the MTC web console 2040693 - "Replication repository" wizard has no validation for name length 2040695 - [MTC UI] "Add Cluster" wizard stucks when the cluster name length is more than 63 characters 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2048537 - Exposed route host to image registry? connecting successfully to invalid registry "xyz.com" 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2055658 - [MTC UI] Cancel button on "Migrations" page does not disappear when migration gets Failed/Succeeded with warnings 2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace 2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the "Last State" field. 2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade 2061335 - [MTC UI] "Update cluster" button is not getting disabled 2062266 - MTC UI does not display logs properly [OADP-BL] 2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend 2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x 2076593 - Velero pod log missing from UI drop down 2076599 - Velero pod log missing from downloaded logs folder [OADP-BL] 2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan 2079252 - [MTC] Rsync options logs not visible in log-reader pod 2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI] 2082225 - non-numeric user when launching stage pods [OADP-BL] 2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments 2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods 2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels 2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL] 2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts 2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL] 2096939 - Fix legacy operator.yml inconsistencies and errors 2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured. Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.9.45. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2022:5878
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
Security Fix(es):
- openshift: oauth-serving-cert configmap contains cluster certificate private key (CVE-2022-2403)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
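The CVE-2022-2403 issue above is a ConfigMap that accidentally embedded a cluster certificate's private key. As a hedged illustration of that issue class (the pattern and function name are my own, not from the advisory or any OpenShift tooling), a minimal check for PEM private-key material in config text:

```python
import re

# PEM headers that should never appear in a serving-cert ConfigMap;
# covers unencrypted PKCS#8 plus RSA/EC traditional encodings.
_PRIVATE_KEY_RE = re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----")

def contains_private_key(config_text: str) -> bool:
    """Return True if the given config/ConfigMap text embeds a PEM private key."""
    return bool(_PRIVATE_KEY_RE.search(config_text))
```

A scan like this only catches PEM-encoded keys; it is a sanity check, not a substitute for applying the fixed release.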
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-x86_64
The image digest is sha256:8ab373599e8a010dffb9c7ed45e01c00cb06a7857fe21de102d978be4738b2ec
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-s390x
The image digest is sha256:1dde8a7134081c82012a812e014daca4cba1095630e6d0c74b51da141d472984
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-ppc64le
The image digest is sha256:ec1fac628bec05eb6425c2ae9dcd3fca120cd1a8678155350bb4c65813cfc30e
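Each release image above is pinned by a sha256 content digest, i.e. the string "sha256:" followed by 64 lowercase hex characters of a SHA-256 hash. A minimal sketch of computing and sanity-checking such digest strings (illustrative only, not the exact manifest-hashing procedure oc performs):

```python
import hashlib
import re

# Well-formed OCI-style digest reference: "sha256:" plus 64 lowercase hex chars.
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def image_digest(manifest_bytes: bytes) -> str:
    """Compute an OCI-style digest string for raw manifest bytes."""
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

def is_valid_digest(digest: str) -> bool:
    """Sanity-check that a digest string copied from an advisory is well formed."""
    return bool(DIGEST_RE.match(digest))
```

Checking the format before pasting a digest into an upgrade command catches truncated or mis-copied values early.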
All OpenShift Container Platform 4.9 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2009024 - Unable to complete cluster destruction, some ports are left over 2055494 - console operator should report Upgradeable False when SAN-less certs are used 2083554 - post 1.23 rebase: regression in service-load balancer reliability 2087021 - configure-ovs.sh fails, blocking new RHEL node from being scaled up on cluster without manual reboot 2088539 - Openshift route URLs starting with double slashes stopped working after update to 4.8.33 - curl version problems 2091806 - Cluster upgrade stuck due to "resource deletions in progress" 2095320 - [4.9] Bootimage bump tracker 2097157 - [4.9z] During ovnkube-node restart all host conntrack entries are flushed, leading to traffic disruption 2100786 - [OCP 4.9] Ironic cannot match "wwn" rootDeviceHint for a multipath device 2101664 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces 2101959 - CVE-2022-2403 openshift: oauth-serving-cert configmap contains cluster certificate private key 2103982 - [4.9] AWS EBS CSI driver stuck removing EBS volumes - GetDeviceMountRefs check failed 2105277 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference" 2105453 - Node reboot causes duplicate persistent volumes 2105654 - egressIP panics with nil pointer dereference 2105663 - APIRequestCount does not identify some APIs removed in 4.9 2106655 - Kubelet slowly leaking memory and pods eventually unable to start 2108538 - [4.9.z backport] br-ex not created due to default bond interface having a different mac address than expected 2108619 - ClusterVersion history pruner does not always retain initial completed update entry
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202201-0496",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.15.14"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.14.276"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.5"
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.11"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.15"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.19.238"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.4.189"
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.10.111"
},
{
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "broadcom",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.2"
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.3"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.20"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"model": "h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"model": "oracle communications cloud native core binding support function",
"scope": null,
"trust": 0.8,
"vendor": "\u30aa\u30e9\u30af\u30eb",
"version": null
},
{
"model": "brocade fabric os",
"scope": null,
"trust": 0.8,
"vendor": "broadcom",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.4.189",
"versionStartIncluding": "4.20",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.19.238",
"versionStartIncluding": "4.15",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "4.14.276",
"versionStartIncluding": "4.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.10.111",
"versionStartIncluding": "5.5",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.15.14",
"versionStartIncluding": "5.11",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.1.3:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167852"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "168019"
}
],
"trust": 0.5
},
"cve": "CVE-2021-4197",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "HIGH",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Local",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Complete",
"baseScore": 7.2,
"confidentialityImpact": "Complete",
"exploitabilityScore": null,
"id": "CVE-2021-4197",
"impactScore": null,
"integrityImpact": "Complete",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "High",
"trust": 0.8,
"userInteractionRequired": null,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"id": "VHN-410862",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 0.1,
"vectorString": "AV:L/AC:L/AU:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.8,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2021-4197",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-4197",
"trust": 1.8,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-410862",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "An unprivileged write to the file handler flaw in the Linux kernel\u0027s control groups and namespaces subsystem was found in the way users have access to some less privileged process that are controlled by cgroups and have higher privileged parent process. It is actually both for cgroup2 and cgroup1 versions of control groups. A local user could use this flaw to crash the system or escalate their privileges on the system. Linux Kernel There is an authentication vulnerability in.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. Attackers can use this vulnerability to bypass the restrictions of the Linux kernel through Cgroup Fd Writing to elevate their privileges. ==========================================================================\nUbuntu Security Notice USN-5368-1\nApril 06, 2022\n\nlinux-azure-5.13, linux-oracle-5.13 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-azure-5.13: Linux kernel for Microsoft Azure cloud systems\n- linux-oracle-5.13: Linux kernel for Oracle Cloud systems\n\nDetails:\n\nIt was discovered that the BPF verifier in the Linux kernel did not\nproperly restrict pointer types in certain situations. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2022-1055)\n\nYiqi Sun and Kevin Wang discovered that the cgroups implementation in the\nLinux kernel did not properly restrict access to the cgroups v1\nrelease_agent feature. 
(CVE-2022-0492)\n\nJ\u00fcrgen Gro\u00df discovered that the Xen subsystem within the Linux kernel did\nnot adequately limit the number of events driver domains (unprivileged PV\nbackends) could send to other guest VMs. An attacker in a driver domain\ncould use this to cause a denial of service in other guest VMs. \n(CVE-2021-28711, CVE-2021-28712, CVE-2021-28713)\n\nJ\u00fcrgen Gro\u00df discovered that the Xen network backend driver in the Linux\nkernel did not adequately limit the amount of queued packets when a guest\ndid not process them. An attacker in a guest VM can use this to cause a\ndenial of service (excessive kernel memory consumption) in the network\nbackend domain. (CVE-2021-28714, CVE-2021-28715)\n\nSzymon Heidrich discovered that the USB Gadget subsystem in the Linux\nkernel did not properly restrict the size of control requests for certain\ngadget types, leading to possible out of bounds reads or writes. A local\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. A local\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. (CVE-2021-39698)\n\nIt was discovered that the simulated networking device driver for the Linux\nkernel did not properly initialize memory in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). \n(CVE-2021-4135)\n\nEric Biederman discovered that the cgroup process migration implementation\nin the Linux kernel did not perform permission checks correctly in some\nsituations. (CVE-2021-4197)\n\nBrendan Dolan-Gavitt discovered that the aQuantia AQtion Ethernet device\ndriver in the Linux kernel did not properly validate meta-data coming from\nthe device. A local attacker who can control an emulated device can use\nthis to cause a denial of service (system crash) or possibly execute\narbitrary code. 
(CVE-2021-43975)\n\nIt was discovered that the ARM Trusted Execution Environment (TEE)\nsubsystem in the Linux kernel contained a race condition leading to a use-\nafter-free vulnerability. A local attacker could use this to cause a denial\nof service or possibly execute arbitrary code. (CVE-2021-44733)\n\nIt was discovered that the Phone Network protocol (PhoNet) implementation\nin the Linux kernel did not properly perform reference counting in some\nerror conditions. A local attacker could possibly use this to cause a\ndenial of service (memory exhaustion). (CVE-2021-45095)\n\nIt was discovered that the eBPF verifier in the Linux kernel did not\nproperly perform bounds checking on mov32 operations. A local attacker\ncould use this to expose sensitive information (kernel pointer addresses). \n(CVE-2021-45402)\n\nIt was discovered that the Reliable Datagram Sockets (RDS) protocol\nimplementation in the Linux kernel did not properly deallocate memory in\nsome error conditions. A local attacker could possibly use this to cause a\ndenial of service (memory exhaustion). (CVE-2021-45480)\n\nIt was discovered that the BPF subsystem in the Linux kernel did not\nproperly track pointer types on atomic fetch operations in some situations. \nA local attacker could use this to expose sensitive information (kernel\npointer addresses). (CVE-2022-0264)\n\nIt was discovered that the TIPC Protocol implementation in the Linux kernel\ndid not properly initialize memory in some situations. A local attacker\ncould use this to expose sensitive information (kernel memory). \n(CVE-2022-0382)\n\nSamuel Page discovered that the Transparent Inter-Process Communication\n(TIPC) protocol implementation in the Linux kernel contained a stack-based\nbuffer overflow. A remote attacker could use this to cause a denial of\nservice (system crash) for systems that have a TIPC bearer configured. 
\n(CVE-2022-0435)\n\nIt was discovered that the KVM implementation for s390 systems in the Linux\nkernel did not properly prevent memory operations on PVM guests that were\nin non-protected mode. A local attacker could use this to obtain\nunauthorized memory write access. (CVE-2022-0516)\n\nIt was discovered that the ICMPv6 implementation in the Linux kernel did\nnot properly deallocate memory in certain situations. A remote attacker\ncould possibly use this to cause a denial of service (memory exhaustion). \n(CVE-2022-0742)\n\nIt was discovered that the IPsec implementation in the Linux kernel did not\nproperly allocate enough memory when performing ESP transformations,\nleading to a heap-based buffer overflow. A local attacker could use this to\ncause a denial of service (system crash) or possibly execute arbitrary\ncode. (CVE-2022-27666)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.13.0-1021-azure 5.13.0-1021.24~20.04.1\n linux-image-5.13.0-1025-oracle 5.13.0-1025.30~20.04.1\n linux-image-azure 5.13.0.1021.24~20.04.10\n linux-image-oracle 5.13.0.1025.30~20.04.1\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 
\n\nReferences:\n https://ubuntu.com/security/notices/USN-5368-1\n CVE-2021-28711, CVE-2021-28712, CVE-2021-28713, CVE-2021-28714,\n CVE-2021-28715, CVE-2021-39685, CVE-2021-39698, CVE-2021-4135,\n CVE-2021-4197, CVE-2021-43975, CVE-2021-44733, CVE-2021-45095,\n CVE-2021-45402, CVE-2021-45480, CVE-2022-0264, CVE-2022-0382,\n CVE-2022-0435, CVE-2022-0492, CVE-2022-0516, CVE-2022-0742,\n CVE-2022-1055, CVE-2022-23222, CVE-2022-27666\n\nPackage Information:\n https://launchpad.net/ubuntu/+source/linux-azure-5.13/5.13.0-1021.24~20.04.1\n https://launchpad.net/ubuntu/+source/linux-oracle-5.13/5.13.0-1025.30~20.04.1\n\n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.5 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
\nSee the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* dset: Prototype Pollution in dset (CVE-2022-25645)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nBug fixes:\n\n* Trying to create a new cluster on vSphere and no feedback, stuck in\n\"creating\" (BZ# 1937078)\n\n* Wrong message is displayed when GRC fails to connect to an Ansible Tower\n(BZ# 2051752)\n\n* multicluster_operators_hub_subscription issues due to /tmp usage (BZ#\n2052702)\n\n* Create Cluster, Worker Pool 2 zones do not load options that relate to\nthe selected Region field (BZ# 2054954)\n\n* Changing the multiclusterhub name other than the default name keeps the\nversion in the web console loading (BZ# 2059822)\n\n* search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade\n(BZ# 2065318)\n\n* Uninstall pod crashed when destroying Azure Gov cluster in ACM (BZ#\n2073562)\n\n* Deprovisioned clusters not filtered out by discovery controller (BZ#\n2075594)\n\n* When deleting a secret for a Helm application, duplicate errors show up\nin topology (BZ# 2075675)\n\n* Changing existing placement rules does not change YAML file Regression\n(BZ# 2075724)\n\n* 
Editing Helm Argo Applications does not Prune Old Resources (BZ# 2079906)\n\n* Failed to delete the requested resource [404] error appears after\nsubscription is deleted and its placement rule is used in the second\nsubscription (BZ# 2080713)\n\n* Typo in the logs when Deployable is updated in the subscription namespace\n(BZ# 2080960)\n\n* After Argo App Sets are created in an Upgraded Environment, the Clusters\ncolumn does not indicate the clusters (BZ# 2080716)\n\n* RHACM 2.4.5 images (BZ# 2081438)\n\n* Performance issue to get secret in claim-controller (BZ# 2081908)\n\n* Failed to provision openshift 4.10 on bare metal (BZ# 2094109)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1937078 - Trying to create a new cluster on vSphere and no feedback, stuck in \"creating\"\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2051752 - Wrong message is displayed when GRC fails to connect to an ansible tower\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System account\n2052702 - multicluster_operators_hub_subscription issues due to /tmp usage\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n2054954 - Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field\n2059822 - Changing the multiclusterhub name other than the default name keeps the version in the web console loading. 
\n2065318 - search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2073562 - Uninstall pod crashed when destroying Azure Gov cluster in ACM\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2075594 - Deprovisioned clusters not filtered out by discovery controller\n2075675 - When deleting a secret for a Helm application, duplicate errors show up in topology\n2075724 - Changing existing placement rules does not change YAML file\n2079906 - Editing Helm Argo Applications does not Prune Old Resources\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080713 - Failed to delete the requested resource [404] error appears after subscription is deleted and it\u0027s placement rule is used in the second subscription [Upgrade]\n2080716 - After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters\n2080847 - CVE-2022-25645 dset: Prototype Pollution in dset\n2080960 - Typo in the logs when Deployable is updated in the subscription namespace\n2081438 - RHACM 2.4.5 images\n2081908 - Performance issue to get secret in claim-controller\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2094109 - Failed to provision openshift 4.10 on bare metal\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel security and bug fix update\nAdvisory ID: RHSA-2022:5626-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:5626\nIssue date: 2022-07-19\nCVE Names: CVE-2020-29368 CVE-2021-4197 CVE-2021-4203 \n CVE-2022-1012 CVE-2022-1729 CVE-2022-32250 \n=====================================================================\n\n1. 
Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 8.4\nExtended Update Support. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder EUS (v. 8.4) - aarch64, ppc64le, x86_64\nRed Hat Enterprise Linux BaseOS EUS (v.8.4) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. \n\nSecurity Fix(es):\n\n* kernel: Small table perturb size in the TCP source port generation\nalgorithm can lead to information leak (CVE-2022-1012)\n\n* kernel: race condition in perf_event_open leads to privilege escalation\n(CVE-2022-1729)\n\n* kernel: a use-after-free write in the netfilter subsystem can lead to\nprivilege escalation to root (CVE-2022-32250)\n\n* kernel: cgroup: Use open-time creds and namespace for migration perm\nchecks (CVE-2021-4197)\n\n* kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n(CVE-2021-4203)\n\n* kernel: the copy-on-write implementation can grant unintended write\naccess because of a race condition in a THP mapcount check (CVE-2020-29368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nBug Fix(es):\n\n* Failed to reboot after crash trigger (BZ#2060747)\n\n* conntrack entries linger around after test (BZ#2066357)\n\n* Enable nested virtualization (BZ#2079070)\n\n* slub corruption during LPM of hnv interface (BZ#2081251)\n\n* sleeping function called from invalid context at\nkernel/locking/spinlock_rt.c:35 (BZ#2082091)\n\n* Backport request of \"genirq: use rcu in kstat_irqs_usr()\" (BZ#2083309)\n\n* ethtool -L may cause system to hang (BZ#2083323)\n\n* For isolated CPUs (with nohz_full enabled for isolated CPUs) CPU\nutilization statistics are not getting reflected continuously (BZ#2084139)\n\n* Affinity broken due to vector space exhaustion (BZ#2084647)\n\n* kernel memory leak while freeing nested actions (BZ#2086597)\n\n* sync rhel-8.6 with upstream 5.13 through 5.16 fixes and improvements\n(BZ#2088037)\n\n* Kernel panic possibly when cleaning namespace on pod deletion\n(BZ#2089539)\n\n* Softirq hrtimers are being placed on the per-CPU softirq clocks on\nisolcpu\u2019s. (BZ#2090485)\n\n* fix missed wake-ups in rq_qos_throttle try two (BZ#2092076)\n\n* NFS4 client experiencing IO outages while sending duplicate SYNs and\nerroneous RSTs during connection reestablishment (BZ#2094334)\n\n* using __this_cpu_read() in preemptible [00000000] code:\nkworker/u66:1/937154 (BZ#2095775)\n\n* Need some changes in RHEL8.x kernels. (BZ#2096932)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check\n2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks\n2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n2064604 - CVE-2022-1012 kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak\n2086753 - CVE-2022-1729 kernel: race condition in perf_event_open leads to privilege escalation\n2092427 - CVE-2022-32250 kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS EUS (v.8.4):\n\nSource:\nkernel-4.18.0-305.57.1.el8_4.src.rpm\n\naarch64:\nbpftool-4.18.0-305.57.1.el8_4.aarch64.rpm\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-core-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-cross-headers-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-core-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-devel-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-modules-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-modules-extra-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-devel-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-headers-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-modules-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-modules-extra-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-libs-4.18.0-305.57.1.el8_4.aarch64.rpm\nperf-4.18.0-305.57.1.el8_4.aarch64.rpm\nperf-debuginfo-4.18.0-305.
57.1.el8_4.aarch64.rpm\npython3-perf-4.18.0-305.57.1.el8_4.aarch64.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\n\nnoarch:\nkernel-abi-stablelists-4.18.0-305.57.1.el8_4.noarch.rpm\nkernel-doc-4.18.0-305.57.1.el8_4.noarch.rpm\n\nppc64le:\nbpftool-4.18.0-305.57.1.el8_4.ppc64le.rpm\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-core-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-cross-headers-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-core-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-modules-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-modules-extra-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-headers-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-modules-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-modules-extra-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-libs-4.18.0-305.57.1.el8_4.ppc64le.rpm\nperf-4.18.0-305.57.1.el8_4.ppc64le.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\npython3-perf-4.18.0-305.57.1.el8_4.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\n\ns390x:\nbpftool-4.18.0-305.57.1.el8_4.s390x.rpm\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-core-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-cross-headers-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-core-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-devel-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-modules-4.18.0-305.57.1.el8_4.s390x.
rpm\nkernel-debug-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debuginfo-common-s390x-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-devel-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-headers-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-modules-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-tools-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-core-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-devel-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-modules-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm\nperf-4.18.0-305.57.1.el8_4.s390x.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\npython3-perf-4.18.0-305.57.1.el8_4.s390x.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\n\nx86_64:\nbpftool-4.18.0-305.57.1.el8_4.x86_64.rpm\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-core-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-cross-headers-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-core-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-devel-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-modules-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-modules-extra-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-devel-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-headers-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-modules-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-modules-extra-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-tools-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-
tools-libs-4.18.0-305.57.1.el8_4.x86_64.rpm\nperf-4.18.0-305.57.1.el8_4.x86_64.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\npython3-perf-4.18.0-305.57.1.el8_4.x86_64.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\n\nRed Hat CodeReady Linux Builder EUS (v. 8.4):\n\naarch64:\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-libs-devel-4.18.0-305.57.1.el8_4.aarch64.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\n\nppc64le:\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-libs-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-tools-libs-devel-4.18.0-305.57.1.el8_4.x86_64.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYuFkSdzjgjWX9erEAQhDCxAAknsy8K3eg1J603gMndUGWfI/Fs5VzIaH\nlxGavTw8H57lXRWbQYqJoRKk42uHAH2iCicovyvowJ5SdfnChtAVbG1A1wjJLmJJ\n0YDKoeMn3s1jjThivm5rWGQVdImqLw+CxVvb3Pywv6ZswTI5r4ZB4FEXW8GIR1w2\n1FeHTcwUgNLzeBLdVem1T50lWERG0j0ZGUmv9mu4QMDeWXoSoPcHKWnsmLgDvQif\ndVky3UsFoCJ783WJOIctmY97kOffqIDvZdbPwajAyTByspumtcwt6N7wMU6VfI+u\nB6bRGQgLbElY6IniLUsV7MG8GbbffZvPFNN/n6LdnnFgEt1eDlo6LkZCyPaMbEfx\n2dMxJtcAiXmydMs5QXvNJ3y2UR2fp/iHF8euAnSN3eKTLAxDQwo3c4KvNUKAfFcF\nOAjbyLTilLhiPHRARG4aEWCEUSmfzO3rulNhRcIEWtNIira3/QMFG9qUjNAMvzU1\nM4tMSPkH35gx49p2a6arZceUGDXiRwvrP142GzpAgRWt/GrydjAsRiG4pJM2H5TW\nnB5q7OuwEvch8+o8gJril5uOpm6eI1lylv9wTXbwjpzQqL5k2JcgByRWx8wLqYXy\nwXsBm+JZL9ztSadqoVsFWSqC0yeRkuF185F4gI7+7azjpeQhHtJEix3bgqRhIzK4\n07JERnC1IRg=\n=y1sh\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] ?Update Repository? option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - ?Replication repository? wizard has no validation for name length\n2040695 - [MTC UI] ?Add Cluster? 
wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry ?xyz.com?\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the ?Last State? field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] ?Update cluster? button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster?s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC] Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing 
velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.9.45. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2022:5878\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nSecurity Fix(es):\n\n* openshift: oauth-serving-cert configmap contains cluster certificate\nprivate key (CVE-2022-2403)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s)\nlisted in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-x86_64\n\nThe image digest is\nsha256:8ab373599e8a010dffb9c7ed45e01c00cb06a7857fe21de102d978be4738b2ec\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-s390x\n\nThe image digest is\nsha256:1dde8a7134081c82012a812e014daca4cba1095630e6d0c74b51da141d472984\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-ppc64le\n\nThe image digest is\nsha256:ec1fac628bec05eb6425c2ae9dcd3fca120cd1a8678155350bb4c65813cfc30e\n\nAll OpenShift Container Platform 4.9 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2009024 - Unable to complete cluster destruction, some ports are left over\n2055494 - console operator should report Upgradeable False when SAN-less certs are used\n2083554 - post 1.23 rebase: regression in service-load balancer reliability\n2087021 - configure-ovs.sh fails, blocking new RHEL node from being scaled up on cluster without manual reboot\n2088539 - Openshift route URLs starting with double slashes stopped working after update to 4.8.33 - curl version problems\n2091806 - Cluster upgrade stuck due to \"resource deletions in progress\"\n2095320 - [4.9] Bootimage bump tracker\n2097157 - [4.9z] During ovnkube-node restart all host conntrack entries are flushed, leading to traffic disruption\n2100786 - [OCP 4.9] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2101664 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2101959 - CVE-2022-2403 openshift: oauth-serving-cert configmap contains cluster certificate private key\n2103982 - [4.9] AWS EBS CSI driver stuck removing EBS volumes - GetDeviceMountRefs check failed\n2105277 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2105453 - Node reboot causes duplicate persistent volumes\n2105654 - egressIP panics with nil pointer dereference\n2105663 - APIRequestCount does not identify some APIs removed in 4.9\n2106655 - Kubelet slowly leaking memory and pods eventually unable to start\n2108538 - [4.9.z backport] br-ex not created due to default bond interface having a different mac address than expected\n2108619 - ClusterVersion history pruner does not always retain initial completed update entry\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-4197"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "PACKETSTORM",
"id": "166636"
},
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167852"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "168019"
},
{
"db": "PACKETSTORM",
"id": "167886"
}
],
"trust": 2.34
},
"exploit_availability": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-410862",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
}
]
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-4197",
"trust": 3.4
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "168019",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167886",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167852",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167694",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167746",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167443",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168136",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166392",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167097",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167952",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167748",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167822",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167714",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167072",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202201-1396",
"trust": 0.1
},
{
"db": "CNVD",
"id": "CNVD-2022-68560",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-410862",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166636",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167602",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167622",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167679",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "PACKETSTORM",
"id": "166636"
},
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167852"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "168019"
},
{
"db": "PACKETSTORM",
"id": "167886"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"id": "VAR-202201-0496",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
}
],
"trust": 0.725
},
"last_update_date": "2024-07-23T19:59:00.365000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "NTAP-20220602-0006 Oracle Oracle\u00a0Critical\u00a0Patch\u00a0Update",
"trust": 0.8,
"url": "https://www.broadcom.com/"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-287",
"trust": 1.1
},
{
"problemtype": "Inappropriate authentication (CWE-287) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.9,
"url": "https://www.debian.org/security/2022/dsa-5127"
},
{
"trust": 1.9,
"url": "https://www.debian.org/security/2022/dsa-5173"
},
{
"trust": 1.9,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2035652"
},
{
"trust": 1.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4197"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220602-0006/"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.0,
"url": "https://lore.kernel.org/lkml/20211209214707.805617-1-tj%40kernel.org/t/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-4197"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-4203"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3752"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4157"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3744"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-13974"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-45485"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3773"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4002"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-43976"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-0941"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-43389"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4189"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-44733"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4037"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-29154"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-37159"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3772"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-0404"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3669"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3764"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-20322"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-43056"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-41864"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3612"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-26401"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-27820"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3743"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3737"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1011"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4083"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-45486"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0322"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-4788"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0286"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0001"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3759"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-21781"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0002"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-42739"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-19131"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3696"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-38185"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28733"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21803"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28736"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3697"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28734"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28737"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-25219"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3695"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28735"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29810"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1729"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4203"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-29368"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://lore.kernel.org/lkml/20211209214707.805617-1-tj@kernel.org/t/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44733"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28711"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28715"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-39685"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45402"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0382"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23222"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1055"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0264"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-39698"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43975"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-5.13/5.13.0-1021.24~20.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45095"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.13/5.13.0-1025.30~20.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27666"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0742"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5368-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45480"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25645"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5201"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5626"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5392"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1154"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5483"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-34169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21540"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21540"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21541"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-34169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21541"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2403"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2022:5878"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2403"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5879"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2380"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1011"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28388"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1199"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1198"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28389"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1516"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1204"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5541-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1353"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "PACKETSTORM",
"id": "166636"
},
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167852"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "168019"
},
{
"db": "PACKETSTORM",
"id": "167886"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "PACKETSTORM",
"id": "166636"
},
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167852"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "PACKETSTORM",
"id": "168019"
},
{
"db": "PACKETSTORM",
"id": "167886"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-03-23T00:00:00",
"db": "VULHUB",
"id": "VHN-410862"
},
{
"date": "2023-08-02T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"date": "2022-04-07T16:37:07",
"db": "PACKETSTORM",
"id": "166636"
},
{
"date": "2022-06-28T15:20:26",
"db": "PACKETSTORM",
"id": "167602"
},
{
"date": "2022-07-27T17:32:01",
"db": "PACKETSTORM",
"id": "167852"
},
{
"date": "2022-06-29T20:27:02",
"db": "PACKETSTORM",
"id": "167622"
},
{
"date": "2022-07-01T15:04:32",
"db": "PACKETSTORM",
"id": "167679"
},
{
"date": "2022-08-10T15:50:18",
"db": "PACKETSTORM",
"id": "168019"
},
{
"date": "2022-07-29T14:39:49",
"db": "PACKETSTORM",
"id": "167886"
},
{
"date": "2022-03-23T20:15:10.200000",
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-02-03T00:00:00",
"db": "VULHUB",
"id": "VHN-410862"
},
{
"date": "2023-08-02T06:47:00",
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"date": "2023-11-07T03:40:21.077000",
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "166636"
},
{
"db": "PACKETSTORM",
"id": "167886"
}
],
"trust": 0.2
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Linux\u00a0Kernel\u00a0 Authentication vulnerability in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "arbitrary",
"sources": [
{
"db": "PACKETSTORM",
"id": "166636"
},
{
"db": "PACKETSTORM",
"id": "167886"
}
],
"trust": 0.2
}
}
VAR-202001-1866
Vulnerability from variot - Updated: 2024-07-23 19:54 xmlStringLenDecodeEntities in parser.c in libxml2 2.9.10 has an infinite loop in a certain end-of-file situation. libxml2 is a C library for parsing XML documents. The security vulnerability lies in xmlStringLenDecodeEntities in the parser.c file of libxml2 version 2.9.10. It was discovered that libxml2 incorrectly handled certain XML files (CVE-2019-19956, CVE-2020-7595). In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.
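The entity-decoding path named above (xmlStringLenDecodeEntities) is only reached while the parser substitutes entities, so applications consuming untrusted XML commonly refuse documents that carry DOCTYPE or entity declarations before parsing at all. A minimal Python sketch of that pre-check, using only the standard library (`parse_untrusted` is an illustrative helper, not part of libxml2's API):

```python
# Sketch: reject DOCTYPE/ENTITY declarations before handing untrusted XML to a
# parser, sidestepping the entity-substitution code path where bugs such as
# CVE-2020-7595 live. Helper name and policy are illustrative assumptions.
import xml.etree.ElementTree as ET


def parse_untrusted(xml_text: str) -> ET.Element:
    lowered = xml_text.lower()
    if "<!doctype" in lowered or "<!entity" in lowered:
        raise ValueError("refusing XML with DOCTYPE/entity declarations")
    return ET.fromstring(xml_text)


# A benign document parses normally:
root = parse_untrusted("<config><item>ok</item></config>")
print(root.find("item").text)  # prints: ok

# A document declaring an entity is refused before the parser ever sees it:
try:
    parse_untrusted('<!DOCTYPE d [<!ENTITY x "y">]><d>&x;</d>')
except ValueError as exc:
    print("rejected:", exc)
```

This is a coarse string check rather than a full fix; the actual remedy is upgrading libxml2 past 2.9.10 as the errata below do.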
These updated images include numerous security fixes, bug fixes, and enhancements.

Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume
1813506 - Dockerfile not compatible with docker and buildah
1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup
1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement
1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance
1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)
1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node.
1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default
1842254 - [NooBaa] Compression stats do not add up when compression id disabled
1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster
1849771 - [RFE] Account created by OBC should have same permissions as bucket owner
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot
1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount
1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. if the PV spec does not contain expansion params)
1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14)
1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found
1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining
1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script
1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases.

Solution:
Download the release images via:
quay.io/redhat/quay:v3.3.3 quay.io/redhat/clair-jwt:v3.3.3 quay.io/redhat/quay-builder:v3.3.3 quay.io/redhat/clair:v3.3.3
- Bugs fixed (https://bugzilla.redhat.com/):
1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display
- JIRA issues fixed (https://issues.jboss.org/):
PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version
- Bugs fixed (https://bugzilla.redhat.com/):
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1836815 - Gather image registry config (backport to 4.3)
1849176 - Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist
1874018 - Limit the size of gathered federated metrics from alerts in Insights Operator
1874399 - [DR] etcd-member-recover.sh fails to pull image with unauthorized
1879110 - [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update Advisory ID: RHSA-2020:5633-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2020:5633 Issue date: 2021-02-24 CVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 
CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.7.0 is now available.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2020:5634
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64
The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x
The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le
The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.
Security Fix(es):
- crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)
- golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)
- kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)
- containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)
- heketi: gluster-block volume password details available in logs (CVE-2020-10763)
- golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)
- jwt-go: access restriction bypass vulnerability (CVE-2020-26160)
- golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)
- golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)
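The y18n issue above (CVE-2020-7774) is an instance of a well-known Node.js pattern: a recursive merge that walks attacker-supplied keys without filtering can write through `__proto__` onto `Object.prototype`. The sketch below shows the generic vulnerable pattern, not y18n's actual implementation:

```javascript
// Illustrative recursive merge with no key filtering (the vulnerable pattern).
function unsafeMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value !== null && typeof value === "object") {
      // For key === "__proto__", target[key] resolves to Object.prototype,
      // so the recursion below starts writing onto the global prototype.
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      unsafeMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own data property,
// which Object.keys() then iterates over.
const payload = JSON.parse('{"__proto__": {"polluted": "yes"}}');
unsafeMerge({}, payload);

console.log({}.polluted); // prints: yes -- every plain object now sees the key
```

Typical fixes for this class of bug skip the keys `__proto__`, `constructor`, and `prototype` during the merge, or merge into `Object.create(null)` maps that have no prototype to pollute.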
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.
- Bugs fixed (https://bugzilla.redhat.com/):
1620608 - Restoring deployment config with history leads to weird state
1752220 - [OVN] Network Policy fails to work when project label gets overwritten
1756096 - Local storage operator should implement must-gather spec
1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
1768255 - installer reports 100% complete but failing components
1770017 - Init containers restart when the exited container is removed from node.
1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating
1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset
1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale
1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands
1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions
1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved"
1797766 - "Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor
1801089 - [OVN] Installation failed and monitoring pod not created due to some network error.
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image
1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration
1806000 - CRI-O failing with: error reserving ctr name
1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1810438 - Installation logs are not gathered from OCP nodes
1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist
1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation
1813012 - EtcdDiscoveryDomain no longer needed
1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints
1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use
1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
1819457 - Package Server is in 'Cannot update' status despite properly working
1820141 - [RFE] deploy qemu-quest-agent on the nodes
1822744 - OCS Installation CI test flaking
1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario
1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool
1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file
1829723 - User workload monitoring alerts fire out of the box
1832968 - oc adm catalog mirror does not mirror the index image itself
1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN
1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters
1834995 - olmFull suite always fails once the suite is run on the same cluster
1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz
1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4
1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks
1838751 - [oVirt][Tracker] Re-enable skipped network tests
1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups
1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed
1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP
1841119 - Get rid of config patches and pass flags directly to kcm
1841175 - When an Install Plan gets deleted, OLM does not create a new one
1841381 - Issue with memoryMB validation
1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option
1844727 - Etcd container leaves grep and lsof zombie processes
1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs
1847074 - Filter bar layout issues at some screen widths on search page
1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural
1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5
1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service
1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard
1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing
1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD
1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service
1853115 - the restriction of --cloud option should be shown in help text.
1853116 - --to option does not work with --credentials-requests flag.
1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group.
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importing vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the create api help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery, pods are stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig that has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr, OVN), communication through this loadbalancer fails after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - oc api-resources does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE] Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod's oauth-proxy container: "Authorization header does not start with 'Basic', skipping basic authentication"
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in oc explain
1880787 - No description for Provisioning CRD for oc explain
1880902 - need dnsPolicy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication requires functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claims the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere] Suggested minimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Maintenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show a checkbox to enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visually impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accessibility violations
1885706 - Cypress: Fix 'link-name' accessibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad rootDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options don't get selected style on click, i.e. grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgrading from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - oc explain localvolumediscovery returns empty description
1887751 - oc explain localvolumediscoveryresult returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approving the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take effect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accessibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure environment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging causes problems. It doesn't sync the "overall" sha; it syncs only the sub-arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - new-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When LocalVolumeSet and SC are already created before OCS install, creation of LVD and LVS is skipped when user clicks Create Storage Cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday][VMware to CNV VM import] VMware URL: It is not clear that only the FQDN/IP address is required without 'https://'
1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective
1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set
1894041 - [v2v][Testday][VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used.
1894065 - tag new packages to enable TLS support
1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0
1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries
1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM
1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted
1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)
1894216 - Improve OpenShift Web Console availability
1894275 - Fix CRO owners file to reflect node owner
1894278 - "database is locked" error when adding bundle to index image
1894330 - upgrade channels needs to be updated for 4.7
1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient"
1894374 - Don't prevent the user from uploading a file with incorrect extension
1894432 - [oVirt] sometimes installer timeout on tmp_import_vm
1894477 - bash syntax error in nodeip-configuration.service
1894503 - add automated test for Polarion CNV-5045
1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform
1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets
1894645 - Cinder volume provisioning crashes on nil cloud provider
1894677 - image-pruner job is panicking: klog stack
1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0
1894860 - 'backend' CI job passing despite failing tests
1894910 - Update the node to use the real-time kernel fails
1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package
1895065 - Schema / Samples / Snippets Tabs are all selected at the same time
1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI
1895141 - panic in service-ca injector
1895147 - Remove memory limits on openshift-dns
1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation
1895268 - The bundleAPIs should NOT be empty
1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster
1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release"
1895360 - Machine Config Daemon removes a file although its defined in the dropin
1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces
1897138 - oVirt provider uses deprecated cluster-api project
1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly
1897252 - Firing alerts are not showing up in console UI after cluster is up for some time
1897354 - Operator installation showing success, but Provided APIs are missing
1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused"
1897412 - [sriov]disableDrain did not be updated in CRD of manifest
1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page
1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost'
1897520 - After restarting nodes the image-registry co is in degraded true state.
1897584 - Add casc plugins
1897603 - Cinder volume attachment detection failure in Kubelet
1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized"
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests
1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition
1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service
1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing
1897897 - ptp loses sync on openshift 4.6
1898036 - no network after reboot (IPI)
1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically
1898097 - mDNS floods the baremetal network
1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem
1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster
1898174 - [OVN] EgressIP does not guard against node IP assignment
1898194 - GCP: can't install on custom machine types
1898238 - Installer validations allow same floating IP for API and Ingress
1898268 - [OVN]: make check broken on 4.6
1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default
1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover
1898357 - Within the operatorhub details view, long unbroken text strings do not wrap, breaking the display.
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 - [RFE] Update jenkins-maven-agen to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by its name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove ' from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - "oc adm release mirror" panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service accout issuer
1902091 - "cluster-image-registry-operator" pod leaves connections open when it fails connecting to S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesn't set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports "failed to validate server configuration" err="unsupported log format:"
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items
1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components
1903248 - Backport Upstream Static Pod UID patch
1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]
1903290 - Kubelet repeatedly log the same log line from exited containers
1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption.
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks
1903400 - Migrate a VM which is not running goes to pending state
1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page
1903414 - NodePort is not working when configuring an egress IP address
1903424 - mapi_machine_phase_transition_seconds_sum doesn't work
1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
1903639 - Hostsubnet gatherer produces wrong output
1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service
1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started
1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image
1903717 - Handle different Pod selectors for metal3 Deployment
1903733 - Scale up followed by scale down can delete all running workers
1903917 - Failed to load "Developer Catalog" page
1903999 - Httplog response code is always zero
1904026 - The quota controllers should resync on new resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Bootstrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE - Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for "oc image append" does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Red Hat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE gherkin design scenario-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate "this" without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - "oc image append" can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent generating install-config.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN] Cannot collect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accessible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized "Autoscaled" & "Autoscaling" pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - "Placeholder" value in search filter does not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidc discovery endpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - "Active alerts" section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The "overrides" of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes were introduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with custom type (n1-custom-4-16384)
1908180 - Add source for template is stuck in preparing PVC
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing "ignore-volume-az" is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod gets restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show "kubectl diff" command in the "oc diff" help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Selected capacity chart shows HDD disk even on selection of SSD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only has hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting, which leads to desired master machineconfig not being found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller, which leads to the storage operator never getting ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show nonexistent command info in the "oc rollback -h" help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Certification field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle'
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod using local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods are forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditions
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URLs
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of OpenShift Virtualization reports success early, before it has actually succeeded
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - Update info alert: this template is not editable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\\n\"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in `from dockerfile` flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk source
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChromeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not create during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation]fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text.
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent White screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification that the Affinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled.
1915674 - Golden image PVC creation - storage size should be taken from the template
1915685 - Message for not supported template is not clear enough
1915760 - Need to increase timeout to wait rhel worker get ready
1915793 - quick starts panel syncs incorrectly across browser windows
1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster
1915818 - vsphere-problem-detector: use "_totals" in metrics
1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol
1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version
1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0
1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics
1915885 - Kuryr doesn't support workers running on multiple subnets
1915898 - TaskRun log output shows "undefined" in streaming
1915907 - test/cmd/builds.sh uses docker.io
1915912 - sig-storage-csi-snapshotter image not available
1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard
1915939 - Resizing the browser window removes Web Terminal Icon
1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7
1915962 - ROKS: manifest with machine health check fails to apply in 4.7
1915972 - Global configuration breadcrumbs do not work as expected
1915981 - Install ethtool and conntrack in container for debugging
1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception
1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups
1916021 - OLM enters infinite loop if Pending CSV replaces itself
1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry
1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations
1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk
1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration
1916145 - Explicitly set minimum versions of python libraries
1916164 - Update csi-driver-nfs builder & base images to be consistent with ART
1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7
1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third
1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2
1916379 - error metrics from vsphere-problem-detector should be gauge
1916382 - Can't create ext4 filesystems with Ignition
1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates
1916401 - Deleting an ingress controller with a bad DNS Record hangs
1916417 - [Kuryr] Must-gather does not have all Custom Resources information
1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image
1916454 - teach CCO about upgradeability from 4.6 to 4.7
1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documentation
1916502 - Boot disk mirroring fails with mdadm error
1916524 - Two rootdisk shows on storage step
1916580 - Default yaml is broken for VM and VM template
1916621 - oc adm node-logs examples are wrong
1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret.
1916692 - Possibly fails to destroy LB and thus cluster
1916711 - Update Kube dependencies in MCO to 1.20.0
1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6
1916764 - editing a workload with no application applied, will auto fill the app
1916834 - Pipeline Metrics - Text Updates
1916843 - collect logs from openshift-sdn-controller pod
1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed
1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually
1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited
1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together"
1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace
1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document
1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error
1917117 - Common templates - disks screen: invalid disk name
1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator
1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk
1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened
1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console
1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory
1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7
1917327 - annotations.message maybe wrong for NTOPodsNotReady alert
1917367 - Refactor periodic.go
1917371 - Add docs on how to use the built-in profiler
1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console
1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui
1917484 - [BM][IPI] Failed to scale down machineset
1917522 - Deprecate --filter-by-os in oc adm catalog mirror
1917537 - controllers continuously busy reconciling operator
1917551 - use min_over_time for vsphere prometheus alerts
1917585 - OLM Operator install page missing i18n
1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types
1917605 - Deleting an exgw causes pods to no longer route to other exgws
1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API
1917656 - Add to Project/application for eventSources from topology shows 404
1917658 - Show TP badge for sources powered by camel connectors in create flow
1917660 - Editing parallelism of job get error info
1917678 - Could not provision pv when no symlink and target found on rhel worker
1917679 - Hide double CTA in admin pipelineruns tab
1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exists to the console-operator config
1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0
1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0
1917799 - Gather a list of names and versions of installed OLM operators
1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error
1917814 - Show Broker create option in eventing under admin perspective
1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1917872 - [oVirt] rebase on latest SDK 2021-01-12
1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image
1917938 - upgrade version of dnsmasq package
1917942 - Canary controller causes panic in ingress-operator
1918019 - Undesired scrollbars in markdown area of QuickStart
1918068 - Flaky olm integration tests
1918085 - reversed name of job and namespace in cvo log
1918112 - Flavor is not editable if a customize VM is created from cli
1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources
1918132 - i18n: Volume Snapshot Contents menu is not translated
1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2
1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP
1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026`
1918185 - Capitalization on PLR details page
1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections
1918318 - Kamelet connector's are not shown in eventing section under Admin perspective
1918351 - Gather SAP configuration (SCC & ClusterRoleBinding)
1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews
1918395 - [ovirt] increase livenessProbe period
1918415 - MCD nil pointer on dropins
1918438 - [ja_JP, zh_CN] Serverless i18n misses
1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig
1918471 - CustomNoUpgrade Feature gates are not working correctly
1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk
1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART
1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART
1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197
1918639 - Event listener with triggerRef crashes the console
1918648 - Subscription page doesn't show InstallPlan correctly
1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack
1918748 - helmchartrepo is not http(s)_proxy-aware
1918757 - Consistent failures of features/project-creation.feature Cypress test in CI
1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin
1918826 - Insights popover icons are not horizontally aligned
1918879 - need better debug for bad pull secrets
1918958 - The default NMstate instance from the operator is incorrect
1919097 - Close bracket ")" missing at the end of the sentence in the UI
1919231 - quick search modal cut off on smaller screens
1919259 - Make "Add x" singular in Pipeline Builder
1919260 - VM Template list actions should not wrap
1919271 - NM prepender script doesn't support systemd-resolved
1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry
1919379 - dotnet logo out of date
1919387 - Console login fails with no error when it can't write to localStorage
1919396 - A11y Violation: svg-img-alt on Pod Status ring
1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified
1919750 - Search InstallPlans got Minified React error
1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted
1919823 - OCP 4.7 Internationalization Chinese translate issue
1919851 - Visualization does not render when Pipeline & Task share same name
1919862 - The tip information for `oc new-project --skip-config-write` is wrong
1919876 - VM created via customize wizard cannot inherit template's PVC attributes
1919877 - Click on KSVC breaks with white screen
1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment
1919945 - user entered name value overridden by default value when selecting a git repository
1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference
1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions
1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration
1920200 - user-settings network error results in infinite loop of requests
1920205 - operator-registry e2e tests not working properly
1920214 - Bump golang to 1.15 in cluster-resource-override-admission
1920248 - re-running the pipelinerun with pipelinespec crashes the UI
1920320 - VM template field is "Not available" if it's created from common template
1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`
1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs
1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off
1920426 - Egress Router CNI OWNERS file should have ovn-k team members
1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username
1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time
1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn
1920481 - kuryr-cni pods using unreasonable amount of CPU
1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof
1920524 - Topology graph crashes adding Open Data Hub operator
1920526 - catalog operator causing CPU spikes and bad etcd performance
1920551 - Boot Order is not editable for Templates in "openshift" namespace
1920555 - bump cluster-resource-override-admission api dependencies
1920571 - fcp multipath will not recover failed paths automatically
1920619 - Remove default scheduler profile value
1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present
1920674 - MissingKey errors in bindings namespace
1920684 - Text in language preferences modal is misleading
1920695 - CI is broken because of bad image registry reference in the Makefile
1920756 - update generic-admission-server library to get the system:masters authorization optimization
1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set
1920771 - i18n: Delete persistent volume claim drop down is not translated
1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI
1920912 - Unable to power off BMH from console
1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2"
1920984 - [e2e][automation] some menu items names are outdated
1921013 - Gather PersistentVolume definition (if any) used in image registry config
1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)
1921087 - 'start next quick start' link doesn't work and is unintuitive
1921088 - test-cmd is failing on volumes.sh pretty consistently
1921248 - Clarify the kubelet configuration cr description
1921253 - Text filter default placeholder text not internationalized
1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window
1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo
1921277 - Fix Warning and Info log statements to handle arguments
1921281 - oc get -o yaml --export returns "error: unknown flag: --export"
1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist
1921556 - [OCS with Vault]: OCS pods didn't comeup after deploying with Vault details from UI
1921572 - For external source (i.e GitHub Source) form view as well shows yaml
1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass
1921610 - Pipeline metrics font size inconsistency
1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921655 - [OSP] Incorrect error handling during cloudinfo generation
1921713 - [e2e][automation] fix failing VM migration tests
1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view
1921774 - delete application modal errors when a resource cannot be found
1921806 - Explore page APIResourceLinks aren't i18ned
1921823 - CheckBoxControls not internationalized
1921836 - AccessTableRows don't internationalize "User" or "Group"
1921857 - Test flake when hitting router in e2e tests due to one router not being up to date
1921880 - Dynamic plugins are not initialized on console load in production mode
1921911 - Installer PR #4589 is causing leak of IAM role policy bindings
1921921 - "Global Configuration" breadcrumb does not use sentence case
1921949 - Console bug - source code URL broken for gitlab self-hosted repositories
1921954 - Subscription-related constraints in ResolutionFailed events are misleading
1922015 - buttons in modal header are invisible on Safari
1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated
1922050 - [e2e][automation] Improve vm clone tests
1922066 - Cannot create VM from custom template which has extra disk
1922098 - Namespace selection dialog is not closed after select a namespace
1922099 - Updated Readme documentation for QE code review and setup
1922146 - Egress Router CNI doesn't have logging support.
1922267 - Collect specific ADFS error
1922292 - Bump RHCOS boot images for 4.7
1922454 - CRI-O doesn't enable pprof by default
1922473 - reconcile LSO images for 4.8
1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace
1922782 - Source registry missing docker:// in yaml
1922907 - Interop UI Tests - step implementation for updating feature files
1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons
1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD
1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything
1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources
1923102 - [vsphere-problem-detector-operator] pod's version is not correct
1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot
1923674 - k8s 1.20 vendor dependencies
1923721 - PipelineRun running status icon is not rotating
1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios
1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator
1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator
1923874 - Unable to specify values with % in kubeletconfig
1923888 - Fixes error metadata gathering
1923892 - Update arch.md after refactor.
1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator
1923895 - Changelog generation.
1923911 - [e2e][automation] Improve tests for vm details page and list filter
1923945 - PVC Name and Namespace resets when user changes os/flavor/workload
1923951 - EventSources shows `undefined` in project
1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins
1924046 - Localhost: Refreshing on a Project removes it from nav item urls
1924078 - Topology quick search View all results footer should be sticky.
1924081 - NTO should ship the latest Tuned daemon release 2.15
1924084 - backend tests incorrectly hard-code artifacts dir
1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build
1924135 - Under sufficient load, CRI-O may segfault
1924143 - Code Editor Decorator url is broken for Bitbucket repos
1924188 - Language selector dropdown doesn't always pre-select the language
1924365 - Add extra disk for VM which use boot source PXE
1924383 - Degraded network operator during upgrade to 4.7.z
1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box.
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on
1924583 - Deprecated templates are listed in the Templates screen
1924870 - pick upstream pr#96901: plumb context with request deadline
1924955 - Images from Private external registry not working in deploy Image
1924961 - k8sutil.TrimDNS1123Label creates invalid values
1924985 - Build egress-router-cni for both RHEL 7 and 8
1925020 - Console demo plugin deployment image shoult not point to dockerhub
1925024 - Remove extra validations on kafka source form view net section
1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running
1925072 - NTO needs to ship the current latest stalld v1.7.0
1925163 - Missing info about dev catalog in boot source template column
1925200 - Monitoring Alert icon is missing on the workload in Topology view
1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1
1925319 - bash syntax error in configure-ovs.sh script
1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data
1925516 - Pipeline Metrics Tooltips are overlapping data
1925562 - Add new ArgoCD link from GitOps application environments page
1925596 - Gitops details page image and commit id text overflows past card boundary
1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test
1926588 - The tarball of operator-sdk is not ready for ocp4.7
1927456 - 4.7 still points to 4.6 catalog images
1927500 - API server exits non-zero on 2 SIGTERM signals
1929278 - Monitoring workloads using too high a priorityclass
1929645 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
1929920 - Cluster monitoring documentation link is broken - 404 not found
- References:
https://access.redhat.com/security/cve/CVE-2018-10103
https://access.redhat.com/security/cve/CVE-2018-10105
https://access.redhat.com/security/cve/CVE-2018-14461
https://access.redhat.com/security/cve/CVE-2018-14462
https://access.redhat.com/security/cve/CVE-2018-14463
https://access.redhat.com/security/cve/CVE-2018-14464
https://access.redhat.com/security/cve/CVE-2018-14465
https://access.redhat.com/security/cve/CVE-2018-14466
https://access.redhat.com/security/cve/CVE-2018-14467
https://access.redhat.com/security/cve/CVE-2018-14468
https://access.redhat.com/security/cve/CVE-2018-14469
https://access.redhat.com/security/cve/CVE-2018-14470
https://access.redhat.com/security/cve/CVE-2018-14553
https://access.redhat.com/security/cve/CVE-2018-14879
https://access.redhat.com/security/cve/CVE-2018-14880
https://access.redhat.com/security/cve/CVE-2018-14881
https://access.redhat.com/security/cve/CVE-2018-14882
https://access.redhat.com/security/cve/CVE-2018-16227
https://access.redhat.com/security/cve/CVE-2018-16228
https://access.redhat.com/security/cve/CVE-2018-16229
https://access.redhat.com/security/cve/CVE-2018-16230
https://access.redhat.com/security/cve/CVE-2018-16300
https://access.redhat.com/security/cve/CVE-2018-16451
https://access.redhat.com/security/cve/CVE-2018-16452
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2019-3884
https://access.redhat.com/security/cve/CVE-2019-5018
https://access.redhat.com/security/cve/CVE-2019-6977
https://access.redhat.com/security/cve/CVE-2019-6978
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9455
https://access.redhat.com/security/cve/CVE-2019-9458
https://access.redhat.com/security/cve/CVE-2019-11068
https://access.redhat.com/security/cve/CVE-2019-12614
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13225
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15165
https://access.redhat.com/security/cve/CVE-2019-15166
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-15917
https://access.redhat.com/security/cve/CVE-2019-15925
https://access.redhat.com/security/cve/CVE-2019-16167
https://access.redhat.com/security/cve/CVE-2019-16168
https://access.redhat.com/security/cve/CVE-2019-16231
https://access.redhat.com/security/cve/CVE-2019-16233
https://access.redhat.com/security/cve/CVE-2019-16935
https://access.redhat.com/security/cve/CVE-2019-17450
https://access.redhat.com/security/cve/CVE-2019-17546
https://access.redhat.com/security/cve/CVE-2019-18197
https://access.redhat.com/security/cve/CVE-2019-18808
https://access.redhat.com/security/cve/CVE-2019-18809
https://access.redhat.com/security/cve/CVE-2019-19046
https://access.redhat.com/security/cve/CVE-2019-19056
https://access.redhat.com/security/cve/CVE-2019-19062
https://access.redhat.com/security/cve/CVE-2019-19063
https://access.redhat.com/security/cve/CVE-2019-19068
https://access.redhat.com/security/cve/CVE-2019-19072
https://access.redhat.com/security/cve/CVE-2019-19221
https://access.redhat.com/security/cve/CVE-2019-19319
https://access.redhat.com/security/cve/CVE-2019-19332
https://access.redhat.com/security/cve/CVE-2019-19447
https://access.redhat.com/security/cve/CVE-2019-19524
https://access.redhat.com/security/cve/CVE-2019-19533
https://access.redhat.com/security/cve/CVE-2019-19537
https://access.redhat.com/security/cve/CVE-2019-19543
https://access.redhat.com/security/cve/CVE-2019-19602
https://access.redhat.com/security/cve/CVE-2019-19767
https://access.redhat.com/security/cve/CVE-2019-19770
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20054
https://access.redhat.com/security/cve/CVE-2019-20218
https://access.redhat.com/security/cve/CVE-2019-20386
https://access.redhat.com/security/cve/CVE-2019-20387
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20636
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-20812
https://access.redhat.com/security/cve/CVE-2019-20907
https://access.redhat.com/security/cve/CVE-2019-20916
https://access.redhat.com/security/cve/CVE-2020-0305
https://access.redhat.com/security/cve/CVE-2020-0444
https://access.redhat.com/security/cve/CVE-2020-1716
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-1751
https://access.redhat.com/security/cve/CVE-2020-1752
https://access.redhat.com/security/cve/CVE-2020-1971
https://access.redhat.com/security/cve/CVE-2020-2574
https://access.redhat.com/security/cve/CVE-2020-2752
https://access.redhat.com/security/cve/CVE-2020-2922
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3898
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-6405
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-8177
https://access.redhat.com/security/cve/CVE-2020-8492
https://access.redhat.com/security/cve/CVE-2020-8563
https://access.redhat.com/security/cve/CVE-2020-8566
https://access.redhat.com/security/cve/CVE-2020-8619
https://access.redhat.com/security/cve/CVE-2020-8622
https://access.redhat.com/security/cve/CVE-2020-8623
https://access.redhat.com/security/cve/CVE-2020-8624
https://access.redhat.com/security/cve/CVE-2020-8647
https://access.redhat.com/security/cve/CVE-2020-8648
https://access.redhat.com/security/cve/CVE-2020-8649
https://access.redhat.com/security/cve/CVE-2020-9327
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 
https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T lmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H EmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8 4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4 mWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL ISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy Ae5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk 4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM uR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG krzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv RjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6 McvuEaxco7U= =sw8i -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .
Bug Fix(es):
- Configuring the system with non-RT kernel will hang the system (BZ#1923220)

- Bugs fixed (https://bugzilla.redhat.com/):
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
- JIRA issues fixed (https://issues.jboss.org/):
CNF-802 - Infrastructure-provided enablement/disablement of interrupt processing for guaranteed pod CPUs CNF-854 - Performance tests in CNF Tests
- Description:
Red Hat OpenShift Do (odo) is a simple CLI tool for developers to create, build, and deploy applications on OpenShift. The odo tool is completely client-based and requires no server within the OpenShift cluster for deployment. It detects changes to local code and deploys them to the cluster automatically, giving instant feedback to validate changes in real time. It supports multiple programming languages and frameworks.
The advisory addresses the following issues:
- Re-release of odo-init-image 1.1.3 for security updates

- Solution:
Download and install a new CLI binary by following the instructions linked from the References section.

Bugs fixed (https://bugzilla.redhat.com/):
1832983 - Release of 1.1.3 odo-init-image
- Solution:
See the documentation at: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless_applications/index
- Bugs fixed (https://bugzilla.redhat.com/):
1874857 - CVE-2020-24553 golang: default Content-Type setting in net/http/cgi and net/http/fcgi could cause XSS 1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1897643 - CVE-2020-28366 golang: malicious symbol names can lead to code execution at build time 1897646 - CVE-2020-28367 golang: improper validation of cgo flags can lead to code execution at build time 1906381 - Release of OpenShift Serverless Serving 1.12.0 1906382 - Release of OpenShift Serverless Eventing 1.12.0
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202001-1866",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "libxml2",
"scope": "eq",
"trust": 1.8,
"vendor": "xmlsoft",
"version": "2.9.10"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core network function cloud native environment",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.10.0"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "real user experience insight",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.3.1.0"
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "32"
},
{
"model": "snapdrive",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "symantec netbackup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "16.04"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "30"
},
{
"model": "mysql workbench",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.26"
},
{
"model": "smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "sinema remote connect server",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "12.04"
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "18.04"
},
{
"model": "enterprise manager base platform",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.5.0.0"
},
{
"model": "enterprise manager ops center",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.4.0.0"
},
{
"model": "real user experience insight",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.4.1.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "31"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "19.10"
},
{
"model": "real user experience insight",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.5.1.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "14.04"
},
{
"model": "enterprise manager base platform",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.4.0.0"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"model": "libxml2",
"scope": "eq",
"trust": 0.8,
"vendor": "xmlsoft",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:xmlsoft:libxml2:2.9.10:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:16.04:*:*:*:lts:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.04:*:*:*:lts:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:14.04:*:*:*:esm:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:19.10:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:12.04:*:*:*:-:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:siemens:sinema_remote_connect_server:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:smi-s_provider:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:snapdrive:-:*:*:*:*:windows:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:symantec_netbackup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:real_user_experience_insight:13.3.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_base_platform:13.4.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_ops_center:12.4.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_base_platform:13.5.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.26",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:real_user_experience_insight:13.4.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:real_user_experience_insight:13.5.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_function_cloud_native_environment:1.10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "160624"
},
{
"db": "PACKETSTORM",
"id": "160889"
},
{
"db": "PACKETSTORM",
"id": "160125"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "161546"
},
{
"db": "PACKETSTORM",
"id": "161548"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "160961"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
],
"trust": 1.4
},
"cve": "CVE-2020-7595",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Network",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Partial",
"baseScore": 5.0,
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2020-7595",
"impactScore": null,
"integrityImpact": "None",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "Medium",
"trust": 0.9,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "VHN-185720",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.5,
"baseSeverity": "High",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2020-7595",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2020-7595",
"trust": 1.8,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202001-965",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-185720",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2020-7595",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "xmlStringLenDecodeEntities in parser.c in libxml2 2.9.10 has an infinite loop in a certain end-of-file situation. libxml2 is a function library written in C language for parsing XML documents. There is a security vulnerability in the xmlStringLenDecodeEntities of the parser.c file in libxml2 version 2.9.10. It exists that libxml2 incorrectly handled certain XML files. \n(CVE-2019-19956, CVE-2020-7595). In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nThese updated images include numerous security fixes, bug fixes, and\nenhancements. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume\n1813506 - Dockerfile not compatible with docker and buildah\n1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup\n1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement\n1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance\n1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)\n1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. 
\n1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default\n1842254 - [NooBaa] Compression stats do not add up when compression id disabled\n1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster\n1849771 - [RFE] Account created by OBC should have same permissions as bucket owner\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot\n1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume\n1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount\n1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. 
if the PV spec does not contain expansion params)\n1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips \"b\" and \"c\" (spawned from Bug 1840084#c14)\n1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage\n1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards\n1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found\n1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining\n1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script\n1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases. Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. Bugs fixed (https://bugzilla.redhat.com/):\n\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1836815 - Gather image registry config (backport to 4.3)\n1849176 - Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist\n1874018 - Limit the size of gathered federated metrics from alerts in Insights Operator\n1874399 - [DR] etcd-member-recover.sh fails to pull image with unauthorized\n1879110 - [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:5633-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:5633
Issue date:        2021-02-24
CVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461
                   CVE-2018-14462 CVE-2018-14463 CVE-2018-14464
                   CVE-2018-14465 CVE-2018-14466 CVE-2018-14467
                   CVE-2018-14468 CVE-2018-14469 CVE-2018-14470
                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880
                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227
                   CVE-2018-16228 CVE-2018-16229 CVE-2018-16230
                   CVE-2018-16300 CVE-2018-16451 CVE-2018-16452
                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018
                   CVE-2019-6977 CVE-2019-6978 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9455 CVE-2019-9458
                   CVE-2019-11068 CVE-2019-12614 CVE-2019-13050
                   CVE-2019-13225 CVE-2019-13627 CVE-2019-14889
                   CVE-2019-15165 CVE-2019-15166 CVE-2019-15903
                   CVE-2019-15917 CVE-2019-15925 CVE-2019-16167
                   CVE-2019-16168 CVE-2019-16231 CVE-2019-16233
                   CVE-2019-16935 CVE-2019-17450 CVE-2019-17546
                   CVE-2019-18197 CVE-2019-18808 CVE-2019-18809
                   CVE-2019-19046 CVE-2019-19056 CVE-2019-19062
                   CVE-2019-19063 CVE-2019-19068 CVE-2019-19072
                   CVE-2019-19221 CVE-2019-19319 CVE-2019-19332
                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533
                   CVE-2019-19537 CVE-2019-19543 CVE-2019-19602
                   CVE-2019-19767 CVE-2019-19770 CVE-2019-19906
                   CVE-2019-19956 CVE-2019-20054 CVE-2019-20218
                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388
                   CVE-2019-20454 CVE-2019-20636 CVE-2019-20807
                   CVE-2019-20812 CVE-2019-20907 CVE-2019-20916
                   CVE-2020-0305 CVE-2020-0444 CVE-2020-1716
                   CVE-2020-1730 CVE-2020-1751 CVE-2020-1752
                   CVE-2020-1971 CVE-2020-2574 CVE-2020-2752
                   CVE-2020-2922 CVE-2020-3862 CVE-2020-3864
                   CVE-2020-3865 CVE-2020-3867 CVE-2020-3868
                   CVE-2020-3885 CVE-2020-3894 CVE-2020-3895
                   CVE-2020-3897 CVE-2020-3898 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902
                   CVE-2020-6405 CVE-2020-7595 CVE-2020-7774
                   CVE-2020-8177 CVE-2020-8492 CVE-2020-8563
                   CVE-2020-8566 CVE-2020-8619 CVE-2020-8622
                   CVE-2020-8623 CVE-2020-8624 CVE-2020-8647
                   CVE-2020-8648 CVE-2020-8649 CVE-2020-9327
                   CVE-2020-9802 CVE-2020-9803 CVE-2020-9805
                   CVE-2020-9806 CVE-2020-9807 CVE-2020-9843
                   CVE-2020-9850 CVE-2020-9862 CVE-2020-9893
                   CVE-2020-9894 CVE-2020-9895 CVE-2020-9915
                   CVE-2020-9925 CVE-2020-10018 CVE-2020-10029
                   CVE-2020-10732 CVE-2020-10749 CVE-2020-10751
                   CVE-2020-10763 CVE-2020-10773 CVE-2020-10774
                   CVE-2020-10942 CVE-2020-11565 CVE-2020-11668
                   CVE-2020-11793 CVE-2020-12465 CVE-2020-12655
                   CVE-2020-12659 CVE-2020-12770 CVE-2020-12826
                   CVE-2020-13249 CVE-2020-13630 CVE-2020-13631
                   CVE-2020-13632 CVE-2020-14019 CVE-2020-14040
                   CVE-2020-14381 CVE-2020-14382 CVE-2020-14391
                   CVE-2020-14422 CVE-2020-15157 CVE-2020-15503
                   CVE-2020-15862 CVE-2020-15999 CVE-2020-16166
                   CVE-2020-24490 CVE-2020-24659 CVE-2020-25211
                   CVE-2020-25641 CVE-2020-25658 CVE-2020-25661
                   CVE-2020-25662 CVE-2020-25681 CVE-2020-25682
                   CVE-2020-25683 CVE-2020-25684 CVE-2020-25685
                   CVE-2020-25686 CVE-2020-25687 CVE-2020-25694
                   CVE-2020-25696 CVE-2020-26160 CVE-2020-27813
                   CVE-2020-27846 CVE-2020-28362 CVE-2020-29652
                   CVE-2021-2007 CVE-2021-3121
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate.
A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command.
Instructions for upgrading a cluster are available at
https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Security Fix(es):

* crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)
* golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)
* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)
* kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)
* containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)
* heketi: gluster-block volume password details available in logs (CVE-2020-10763)
* golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)
* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)
* golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)
* golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

3. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

4.
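Among the fixes listed above, CVE-2020-7774 (nodejs-y18n) is a prototype pollution issue. A minimal sketch of that vulnerability class, assuming a hypothetical `naiveMerge` in place of the library's real deep-merge code:

```javascript
// Hypothetical vulnerable deep merge -- NOT y18n's actual code, just the
// pattern behind CVE-2020-7774-style prototype pollution.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (typeof value === "object" && value !== null) {
      // Recursing through a "__proto__" key walks into Object.prototype.
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      naiveMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property, so an
// attacker-controlled document reaches the recursive branch above.
const payload = JSON.parse('{"__proto__": {"polluted": "yes"}}');
naiveMerge({}, payload);
console.log({}.polluted); // "yes" -- every plain object is now affected

// Mitigation used by patched merge implementations: skip dangerous keys.
function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (key === "__proto__" || key === "constructor" || key === "prototype") continue;
    target[key] = source[key];
  }
  return target;
}
const clean = safeMerge({}, JSON.parse('{"__proto__": {"alsoPolluted": true}}'));
console.log(clean.alsoPolluted); // undefined
```

The fix in affected packages follows the second pattern: merge logic refuses to traverse `__proto__`, `constructor`, and `prototype` keys from untrusted input.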
Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state
1752220 - [OVN] Network Policy fails to work when project label gets overwritten
1756096 - Local storage operator should implement must-gather spec
1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
1768255 - installer reports 100% complete but failing components
1770017 - Init containers restart when the exited container is removed from node.
1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating
1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset
1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale
1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands
1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions
1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved"
1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor
1801089 - [OVN] Installation failed and monitoring pod not created due to some network error.
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image
1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration
1806000 - CRI-O failing with: error reserving ctr name
1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1810438 - Installation logs are not gathered from OCP nodes
1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist
1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation
1813012 - EtcdDiscoveryDomain no longer needed
1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints
1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use
1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
1819457 - Package Server is in 'Cannot update' status despite properly working
1820141 - [RFE] deploy qemu-quest-agent on the nodes
1822744 - OCS Installation CI test flaking
1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario
1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool
1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file
1829723 - User workload monitoring alerts fire out of the box
1832968 - oc adm catalog mirror does not mirror the index image itself
1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN
1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters
1834995 - olmFull suite always fails once th suite is run on the same cluster
1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz
1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4
1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks
1838751 - [oVirt][Tracker] Re-enable skipped network tests
1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups
1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed
1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP
1841119 - Get rid of config patches and pass flags directly to kcm
1841175 - When an Install Plan gets deleted, OLM does not create a new one
1841381 - Issue with memoryMB validation
1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option
1844727 - Etcd container leaves grep and lsof zombie processes
1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs
1847074 - Filter bar layout issues at some screen widths on search page
1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural
1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5
1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service
1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard
1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing
1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD
1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service
1853115 - the restriction of --cloud option should be shown in help text.
1853116 - `--to` option does not work with `--credentials-requests` flag.
1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group.
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there's no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why "No persistent storage alerts" was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: "?" button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\\"/mount-point\\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - `oc api-resources` does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in `oc explain`
1880787 - No description for Provisioning CRD for `oc explain`
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress: Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
\n1886751 - Gather MachineConfigPools\n1886766 - PVC dropdown has \u0027Persistent Volume\u0027 Label\n1886834 - ovn-cert is mandatory in both master and node daemonsets\n1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState\n1886861 - ordered-values.yaml not honored if values.schema.json provided\n1886871 - Neutron ports created for hostNetworking pods\n1886890 - Overwrite jenkins-agent-base imagestream\n1886900 - Cluster-version operator fills logs with \"Manifest: ...\" spew\n1886922 - [sig-network] pods should successfully create sandboxes by getting pod\n1886973 - Local storage operator doesn\u0027t include correctly populate LocalVolumeDiscoveryResult in console\n1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO\n1887010 - Imagepruner met error \"Job has reached the specified backoff limit\" which causes image registry degraded\n1887026 - FC volume attach fails with \u201cno fc disk found\u201d error on OCP 4.6 PowerVM cluster\n1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6\n1887046 - Event for LSO need update to avoid confusion\n1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image\n1887375 - User should be able to specify volumeMode when creating pvc from web-console\n1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console\n1887392 - openshift-apiserver: delegated authn/z should have ttl \u003e metrics/healthz/readyz/openapi interval\n1887428 - oauth-apiserver service should be monitored by prometheus\n1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting \"degraded: False\"\n1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data\n1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN 
Kubernetes\n1887465 - Deleted project is still referenced\n1887472 - unable to edit application group for KSVC via gestures (shift+Drag)\n1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface\n1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster\n1887525 - Failures to set master HardwareDetails cannot easily be debugged\n1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable\n1887585 - ovn-masters stuck in crashloop after scale test\n1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade. \n1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator\n1887740 - cannot install descheduler operator after uninstalling it\n1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events\n1887750 - `oc explain localvolumediscovery` returns empty description\n1887751 - `oc explain localvolumediscoveryresult` returns empty description\n1887778 - Add ContainerRuntimeConfig gatherer\n1887783 - PVC upload cannot continue after approve the certificate\n1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard\n1887799 - User workload monitoring prometheus-config-reloader OOM\n1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky\n1887863 - Installer panics on invalid flavor\n1887864 - Clean up dependencies to avoid invalid scan flagging\n1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison\n1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig\n1888015 - workaround kubelet graceful 
termination of static pods bug\n1888028 - prevent extra cycle in aggregated apiservers\n1888036 - Operator details shows old CRD versions\n1888041 - non-terminating pods are going from running to pending\n1888072 - Setting Supermicro node to PXE boot via Redfish doesn\u0027t take affect\n1888073 - Operator controller continuously busy looping\n1888118 - Memory requests not specified for image registry operator\n1888150 - Install Operand Form on OperatorHub is displaying unformatted text\n1888172 - PR 209 didn\u0027t update the sample archive, but machineset and pdbs are now namespaced\n1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build\n1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5\n1888311 - p\u0026f: make SAR traffic from oauth and openshift apiserver exempt\n1888363 - namespaces crash in dev\n1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created\n1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected\n1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC\n1888494 - imagepruner pod is error when image registry storage is not configured\n1888565 - [OSP] machine-config-daemon-firstboot.service failed with \"error reading osImageURL from rpm-ostree\"\n1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error\n1888601 - The poddisruptionbudgets is using the operator service account, instead of gather\n1888657 - oc doesn\u0027t know its name\n1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable\n1888671 - Document the Cloud Provider\u0027s ignore-volume-az setting\n1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image\n1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s\", cr.GetName()\n1888827 - ovnkube-master 
may segfault when trying to add IPs to a nil address set\n1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster\n1888866 - AggregatedAPIDown permanently firing after removing APIService\n1888870 - JS error when using autocomplete in YAML editor\n1888874 - hover message are not shown for some properties\n1888900 - align plugins versions\n1888985 - Cypress: Fix \u0027Ensures buttons have discernible text\u0027 accesibility violation\n1889213 - The error message of uploading failure is not clear enough\n1889267 - Increase the time out for creating template and upload image in the terraform\n1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)\n1889374 - Kiali feature won\u0027t work on fresh 4.6 cluster\n1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode\n1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade\n1889515 - Accessibility - The symbols e.g checkmark in the Node \u003e overview page has no text description, label, or other accessible information\n1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance\n1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown\n1889577 - Resources are not shown on project workloads page\n1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment\n1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages\n1889692 - Selected Capacity is showing wrong size\n1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15\n1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off\n1889710 - Prometheus metrics on disk take more space compared to OCP 4.5\n1889721 - opm index add 
semver-skippatch mode does not respect prerelease versions\n1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn\u0027t see the Disk tab\n1889767 - [vsphere] Remove certificate from upi-installer image\n1889779 - error when destroying a vSphere installation that failed early\n1889787 - OCP is flooding the oVirt engine with auth errors\n1889838 - race in Operator update after fix from bz1888073\n1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1\n1889863 - Router prints incorrect log message for namespace label selector\n1889891 - Backport timecache LRU fix\n1889912 - Drains can cause high CPU usage\n1889921 - Reported Degraded=False Available=False pair does not make sense\n1889928 - [e2e][automation] Add more tests for golden os\n1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName\n1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings\n1890074 - MCO extension kernel-headers is invalid\n1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest\n1890130 - multitenant mode consistently fails CI\n1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e\n1890145 - The mismatched of font size for Status Ready and Health Check secondary text\n1890180 - FieldDependency x-descriptor doesn\u0027t support non-sibling fields\n1890182 - DaemonSet with existing owner garbage collected\n1890228 - AWS: destroy stuck on route53 hosted zone not found\n1890235 - e2e: update Protractor\u0027s checkErrors logging\n1890250 - workers may fail to join the cluster during an update from 4.5\n1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member\n1890270 - External IP doesn\u0027t work if the IP address is not assigned to a node\n1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability\n1890456 - [vsphere] mapi_instance_create_failed 
doesn\u0027t work on vsphere\n1890467 - unable to edit an application without a service\n1890472 - [Kuryr] Bulk port creation exception not completely formatted\n1890494 - Error assigning Egress IP on GCP\n1890530 - cluster-policy-controller doesn\u0027t gracefully terminate\n1890630 - [Kuryr] Available port count not correctly calculated for alerts\n1890671 - [SA] verify-image-signature using service account does not work\n1890677 - \u0027oc image info\u0027 claims \u0027does not exist\u0027 for application/vnd.oci.image.manifest.v1+json manifest\n1890808 - New etcd alerts need to be added to the monitoring stack\n1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn\u0027t sync the \"overall\" sha it syncs only the sub arch sha. \n1890984 - Rename operator-webhook-config to sriov-operator-webhook-config\n1890995 - wew-app should provide more insight into why image deployment failed\n1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call\n1891047 - Helm chart fails to install using developer console because of TLS certificate error\n1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler\n1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI\n1891108 - p\u0026f: Increase the concurrency share of workload-low priority level\n1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)\n1891189 - [LSO] max device limit is accepting negative values. 
PVC is not getting created and no error is shown\n1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn\u0027t meet requirements of chart)\n1891362 - Wrong metrics count for openshift_build_result_total\n1891368 - fync should be fsync for etcdHighFsyncDurations alert\u0027s annotations.message\n1891374 - fync should be fsync for etcdHighFsyncDurations critical alert\u0027s annotations.message\n1891376 - Extra text in Cluster Utilization charts\n1891419 - Wrong detail head on network policy detail page. \n1891459 - Snapshot tests should report stderr of failed commands\n1891498 - Other machine config pools do not show during update\n1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage\n1891551 - Clusterautoscaler doesn\u0027t scale up as expected\n1891552 - Handle missing labels as empty. \n1891555 - The windows oc.exe binary does not have version metadata\n1891559 - kuryr-cni cannot start new thread\n1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11\n1891625 - [Release 4.7] Mutable LoadBalancer Scope\n1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml\n1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails\n1891740 - OperatorStatusChanged is noisy\n1891758 - the authentication operator may spam DeploymentUpdated event endlessly\n1891759 - Dockerfile builds cannot change /etc/pki/ca-trust\n1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1\n1891825 - Error message not very informative in case of mode mismatch\n1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. 
\n1891951 - UI should show warning while creating pools with compression on\n1891952 - [Release 4.7] Apps Domain Enhancement\n1891993 - 4.5 to 4.6 upgrade doesn\u0027t remove deployments created by marketplace\n1891995 - OperatorHub displaying old content\n1891999 - Storage efficiency card showing wrong compression ratio\n1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28\u0027 not found (required by ./opm)\n1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. \n1892198 - TypeError in \u0027Performance Profile\u0027 tab displayed for \u0027Performance Addon Operator\u0027\n1892288 - assisted install workflow creates excessive control-plane disruption\n1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config\n1892358 - [e2e][automation] update feature gate for kubevirt-gating job\n1892376 - Deleted netnamespace could not be re-created\n1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky\n1892393 - TestListPackages is flaky\n1892448 - MCDPivotError alert/metric missing\n1892457 - NTO-shipped stalld needs to use FIFO for boosting. \n1892467 - linuxptp-daemon crash\n1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env\n1892653 - User is unable to create KafkaSource with v1beta\n1892724 - VFS added to the list of devices of the nodeptpdevice CRD\n1892799 - Mounting additionalTrustBundle in the operator\n1893117 - Maintenance mode on vSphere blocks installation. \n1893351 - TLS secrets are not able to edit on console. 
\n1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots\n1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky \"worker\" assumption when guessing about ingress availability\n1893546 - Deploy using virtual media fails on node cleaning step\n1893601 - overview filesystem utilization of OCP is showing the wrong values\n1893645 - oc describe route SIGSEGV\n1893648 - Ironic image building process is not compatible with UEFI secure boot\n1893724 - OperatorHub generates incorrect RBAC\n1893739 - Force deletion doesn\u0027t work for snapshots if snapshotclass is already deleted\n1893776 - No useful metrics for image pull time available, making debugging issues there impossible\n1893798 - Lots of error messages starting with \"get namespace to enqueue Alertmanager instances failed\" in the logs of prometheus-operator\n1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD\n1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS\n1893926 - Some \"Dynamic PV (block volmode)\" pattern storage e2e tests are wrongly skipped\n1893944 - Wrong product name for Multicloud Object Gateway\n1893953 - (release-4.7) Gather default StatefulSet configs\n1893956 - Installation always fails at \"failed to initialize the cluster: Cluster operator image-registry is still updating\"\n1893963 - [Testday] Workloads-\u003e Virtualization is not loading for Firefox browser\n1893972 - Should skip e2e test cases as early as possible\n1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without \u0027https://\u0027\n1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective\n1894025 - OCP 4.5 to 4.6 upgrade for \"aws-ebs-csi-driver-operator\" fails when \"defaultNodeSelector\" is set\n1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import 
wizard: The target storage class name is not displayed if default storage class is used. \n1894065 - tag new packages to enable TLS support\n1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0\n1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries\n1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM\n1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted\n1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)\n1894216 - Improve OpenShift Web Console availability\n1894275 - Fix CRO owners file to reflect node owner\n1894278 - \"database is locked\" error when adding bundle to index image\n1894330 - upgrade channels needs to be updated for 4.7\n1894342 - oauth-apiserver logs many \"[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient\"\n1894374 - Dont prevent the user from uploading a file with incorrect extension\n1894432 - [oVirt] sometimes installer timeout on tmp_import_vm\n1894477 - bash syntax error in nodeip-configuration.service\n1894503 - add automated test for Polarion CNV-5045\n1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform\n1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets\n1894645 - Cinder volume provisioning crashes on nil cloud provider\n1894677 - image-pruner job is panicking: klog stack\n1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0\n1894860 - \u0027backend\u0027 CI job passing despite failing tests\n1894910 - Update the node to use the real-time kernel fails\n1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package\n1895065 - Schema / Samples / Snippets Tabs are all selected at the same time\n1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI\n1895141 - panic in service-ca 
injector\n1895147 - Remove memory limits on openshift-dns\n1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation\n1895268 - The bundleAPIs should NOT be empty\n1895309 - [OCP v47] The RHEL node scaleup fails due to \"No package matching \u0027cri-o-1.19.*\u0027 found available\" on OCP 4.7 cluster\n1895329 - The infra index filled with warnings \"WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release\"\n1895360 - Machine Config Daemon removes a file although its defined in the dropin\n1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1\n1895372 - Web console going blank after selecting any operator to install from OperatorHub\n1895385 - Revert KUBELET_LOG_LEVEL back to level 3\n1895423 - unable to edit an application with a custom builder image\n1895430 - unable to edit custom template application\n1895509 - Backup taken on one master cannot be restored on other masters\n1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image\n1895838 - oc explain description contains \u0027/\u0027\n1895908 - \"virtio\" option is not available when modifying a CD-ROM to disk type\n1895909 - e2e-metal-ipi-ovn-dualstack is failing\n1895919 - NTO fails to load kernel modules\n1895959 - configuring webhook token authentication should prevent cluster upgrades\n1895979 - Unable to get coreos-installer with --copy-network to work\n1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV\n1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)\n1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed\n1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest\n1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded\n1896244 - Found a panic in storage e2e test\n1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general\n1896302 - [e2e][automation] Fix 4.6 test failures\n1896365 - [Migration]The SDN migration cannot revert under some conditions\n1896384 - [ovirt IPI]: local coredns resolution not working\n1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6\n1896529 - Incorrect instructions in the Serverless operator and application quick starts\n1896645 - documentationBaseURL needs to be updated for 4.7\n1896697 - [Descheduler] policy.yaml param in cluster configmap is empty\n1896704 - Machine API components should honour cluster wide proxy settings\n1896732 - \"Attach to Virtual Machine OS\" button should not be visible on old clusters\n1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator\n1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails\n1896918 - start creating new-style Secrets for AWS\n1896923 - DNS pod /metrics exposed on anonymous http port\n1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... 
must be no more than 63 characters\n1897003 - VNC console cannot be connected after visit it in new window\n1897008 - Cypress: reenable check for \u0027aria-hidden-focus\u0027 rule \u0026 checkA11y test for modals\n1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO\n1897039 - router pod keeps printing log: template \"msg\"=\"router reloaded\" \"output\"=\"[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option \u0027http-use-htx\u0027 is deprecated and ignored\n1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. \n1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces\n1897138 - oVirt provider uses depricated cluster-api project\n1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly\n1897252 - Firing alerts are not showing up in console UI after cluster is up for some time\n1897354 - Operator installation showing success, but Provided APIs are missing\n1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with \"connection refused\"\n1897412 - [sriov]disableDrain did not be updated in CRD of manifest\n1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page\n1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to \u0027localhost\u0027\n1897520 - After restarting nodes the image-registry co is in degraded true state. 
\n1897584 - Add casc plugins\n1897603 - Cinder volume attachment detection failure in Kubelet\n1897604 - Machine API deployment fails: Kube-Controller-Manager can\u0027t reach API: \"Unauthorized\"\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests\n1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition\n1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`\n1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing\n1897897 - ptp lose sync openshift 4.6\n1898036 - no network after reboot (IPI)\n1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically\n1898097 - mDNS floods the baremetal network\n1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem\n1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied\n1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster\n1898174 - [OVN] EgressIP does not guard against node IP assignment\n1898194 - GCP: can\u0027t install on custom machine types\n1898238 - Installer validations allow same floating IP for API and Ingress\n1898268 - [OVN]: `make check` broken on 4.6\n1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default\n1898320 - Incorrect Apostrophe Translation of \"it\u0027s\" in Scheduling Disabled Popover\n1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 - [RFE] Update jenkins-maven-agen to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by it's name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove &apos; from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - `oc adm release mirror` panic panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service accout issuer
1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesnt set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format:
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items
1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components
1903248 - Backport Upstream Static Pod UID patch
1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]
1903290 - Kubelet repeatedly log the same log line from exited containers
1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption.
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks
1903400 - Migrate a VM which is not running goes to pending state
1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page
1903414 - NodePort is not working when configuring an egress IP address
1903424 - mapi_machine_phase_transition_seconds_sum doesn't work
1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
1903639 - Hostsubnet gatherer produces wrong output
1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service
1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started
1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image
1903717 - Handle different Pod selectors for metal3 Deployment
1903733 - Scale up followed by scale down can delete all running workers
1903917 - Failed to load "Developer Catalog" page
1903999 - Httplog response code is always zero
1904026 - The quota controllers should resync on new resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE -Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory
Path to Devfile\n1910805 - Missing translation for Pipeline status and breadcrumb text\n1910829 - Cannot delete a PVC if the dv\u0027s phase is WaitForFirstConsumer\n1910840 - Show Nonexistent command info in the `oc rollback -h` help page\n1910859 - breadcrumbs doesn\u0027t use last namespace\n1910866 - Unify templates string\n1910870 - Unify template dropdown action\n1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6\n1911129 - Monitoring charts renders nothing when switching from a Deployment to \"All workloads\"\n1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard\n1911212 - [MSTR-998] API Performance Dashboard \"Period\" drop-down has a choice \"$__auto_interval_period\" which can bring \"1:154: parse error: missing unit character in duration\"\n1911213 - Wrong and misleading warning for VMs that were created manually (not from template)\n1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created\n1911269 - waiting for the build message present when build exists\n1911280 - Builder images are not detected for Dotnet, Httpd, NGINX\n1911307 - Pod Scale-up requires extra privileges in OpenShift web-console\n1911381 - \"Select Persistent Volume Claim project\" shows in customize wizard when select a source available template\n1911382 - \"source volumeMode (Block) and target volumeMode (Filesystem) do not match\" shows in VM Error\n1911387 - Hit error - \"Cannot read property \u0027value\u0027 of undefined\" while creating VM from template\n1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation\n1911418 - [v2v] The target storage class name is not displayed if default storage class is used\n1911434 - git ops empty state page displays icon with watermark\n1911443 - SSH Cretifiaction field should be validated\n1911465 - IOPS display wrong unit\n1911474 - Devfile Application Group Does Not Delete Cleanly (errors)\n1911487 - Pruning Deployments should use 
ReplicaSets instead of ReplicationController\n1911574 - Expose volume mode on Upload Data form\n1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined\n1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel\n1911656 - using \u0027operator-sdk run bundle\u0027 to install operator successfully, but the command output said \u0027Failed to run bundle\u0027\u0027\n1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state\n1911782 - Descheduler should not evict pod used local storage by the PVC\n1911796 - uploading flow being displayed before submitting the form\n1912066 - The ansible type operator\u0027s manager container is not stable when managing the CR\n1912077 - helm operator\u0027s default rbac forbidden\n1912115 - [automation] Analyze job keep failing because of \u0027JavaScript heap out of memory\u0027\n1912237 - Rebase CSI sidecars for 4.7\n1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page\n1912409 - Fix flow schema deployment\n1912434 - Update guided tour modal title\n1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken\n1912523 - Standalone pod status not updating in topology graph\n1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion\n1912558 - TaskRun list and detail screen doesn\u0027t show Pending status\n1912563 - p\u0026f: carry 97206: clean up executing request on panic\n1912565 - OLM macOS local build broken by moby/term dependency\n1912567 - [OCP on RHV] Node becomes to \u0027NotReady\u0027 status when shutdown vm from RHV UI only on the second deletion\n1912577 - 4.1/4.2-\u003e4.3-\u003e...-\u003e 4.7 upgrade is stuck during 4.6-\u003e4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff\n1912590 - publicImageRepository not being populated\n1912640 - Go 
operator\u0027s controller pods is forbidden\n1912701 - Handle dual-stack configuration for NIC IP\n1912703 - multiple queries can\u0027t be plotted in the same graph under some conditons\n1912730 - Operator backed: In-context should support visual connector if SBO is not installed\n1912828 - Align High Performance VMs with High Performance in RHV-UI\n1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates\n1912852 - VM from wizard - available VM templates - \"storage\" field is \"0 B\"\n1912888 - recycler template should be moved to KCM operator\n1912907 - Helm chart repository index can contain unresolvable relative URL\u0027s\n1912916 - Set external traffic policy to cluster for IBM platform\n1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller\n1912938 - Update confirmation modal for quick starts\n1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment\n1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment\n1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver\n1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912977 - rebase upstream static-provisioner\n1913006 - Remove etcd v2 specific alerts with etcd_http* metrics\n1913011 - [OVN] Pod\u0027s external traffic not use egressrouter macvlan ip as a source ip\n1913037 - update static-provisioner base image\n1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state\n1913085 - Regression OLM uses scoped client for 
CRD installation\n1913096 - backport: cadvisor machine metrics are missing in k8s 1.19\n1913132 - The installation of Openshift Virtualization reports success early before it \u0027s succeeded eventually\n1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root\n1913196 - Guided Tour doesn\u0027t handle resizing of browser\n1913209 - Support modal should be shown for community supported templates\n1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort\n1913249 - update info alert this template is not aditable\n1913285 - VM list empty state should link to virtualization quick starts\n1913289 - Rebase AWS EBS CSI driver for 4.7\n1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled\n1913297 - Remove restriction of taints for arbiter node\n1913306 - unnecessary scroll bar is present on quick starts panel\n1913325 - 1.20 rebase for openshift-apiserver\n1913331 - Import from git: Fails to detect Java builder\n1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used\n1913343 - (release-4.7) Added changelog file for insights-operator\n1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator\n1913371 - Missing i18n key \"Administrator\" in namespace \"console-app\" and language \"en.\"\n1913386 - users can see metrics of namespaces for which they don\u0027t have rights when monitoring own services with prometheus user workloads\n1913420 - Time duration setting of resources is not being displayed\n1913536 - 4.6.9 -\u003e 4.7 upgrade hangs. 
RHEL 7.9 worker stuck on \"error enabling unit: Failed to execute operation: File exists\\\\n\\\"\n1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase\n1913560 - Normal user cannot load template on the new wizard\n1913563 - \"Virtual Machine\" is not on the same line in create button when logged with normal user\n1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table\n1913568 - Normal user cannot create template\n1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker\n1913585 - Topology descriptive text fixes\n1913608 - Table data contains data value None after change time range in graph and change back\n1913651 - Improved Red Hat image and crashlooping OpenShift pod collection\n1913660 - Change location and text of Pipeline edit flow alert\n1913685 - OS field not disabled when creating a VM from a template\n1913716 - Include additional use of existing libraries\n1913725 - Refactor Insights Operator Plugin states\n1913736 - Regression: fails to deploy computes when using root volumes\n1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes\n1913751 - add third-party network plugin test suite to openshift-tests\n1913783 - QE-To fix the merging pr issue, commenting the afterEach() block\n1913807 - Template support badge should not be shown for community supported templates\n1913821 - Need definitive steps about uninstalling descheduler operator\n1913851 - Cluster Tasks are not sorted in pipeline builder\n1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists\n1913951 - Update the Devfile Sample Repo to an Official Repo Host\n1913960 - Cluster Autoscaler should use 1.20 dependencies\n1913969 - Field dependency descriptor can sometimes cause an exception\n1914060 - Disk created from \u0027Import via Registry\u0027 cannot be used as boot disk\n1914066 - [sriov] sriov dp pod crash when delete ovs HW 
offload policy\n1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)\n1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances\n1914125 - Still using /dev/vde as default device path when create localvolume\n1914183 - Empty NAD page is missing link to quickstarts\n1914196 - target port in `from dockerfile` flow does nothing\n1914204 - Creating VM from dev perspective may fail with template not found error\n1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets\n1914212 - [e2e][automation] Add test to validate bootable disk souce\n1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes\n1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows\n1914287 - Bring back selfLink\n1914301 - User VM Template source should show the same provider as template itself\n1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs\n1914309 - /terminal page when WTO not installed shows nonsensical error\n1914334 - order of getting started samples is arbitrary\n1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x\n1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI\n1914405 - Quick search modal should be opened when coming back from a selection\n1914407 - Its not clear that node-ca is running as non-root\n1914427 - Count of pods on the dashboard is incorrect\n1914439 - Typo in SRIOV port create command example\n1914451 - cluster-storage-operator pod running as root\n1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true\n1914642 - Customize Wizard Storage tab does not pass validation\n1914723 - SamplesTBRInaccessibleOnBoot Alert 
has a misspelling\n1914793 - device names should not be translated\n1914894 - Warn about using non-groupified api version\n1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug\n1914932 - Put correct resource name in relatedObjects\n1914938 - PVC disk is not shown on customization wizard general tab\n1914941 - VM Template rootdisk is not deleted after fetching default disk bus\n1914975 - Collect logs from openshift-sdn namespace\n1915003 - No estimate of average node readiness during lifetime of a cluster\n1915027 - fix MCS blocking iptables rules\n1915041 - s3:ListMultipartUploadParts is relied on implicitly\n1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons\n1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours\n1915085 - Pods created and rapidly terminated get stuck\n1915114 - [aws-c2s] worker machines are not create during install\n1915133 - Missing default pinned nav items in dev perspective\n1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource\n1915187 - Remove the \"Tech preview\" tag in web-console for volumesnapshot\n1915188 - Remove HostSubnet anonymization\n1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment\n1915217 - OKD payloads expect to be signed with production keys\n1915220 - Remove dropdown workaround for user settings\n1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure\n1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod\n1915277 - [e2e][automation]fix cdi upload form test\n1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout\n1915304 - Updating scheduling component builder \u0026 base images to be consistent with ART\n1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node\n1915318 - 
[Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection\n1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod\n1915357 - Dev Catalog doesn\u0027t load anything if virtualization operator is installed\n1915379 - New template wizard should require provider and make support input a dropdown type\n1915408 - Failure in operator-registry kind e2e test\n1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation\n1915460 - Cluster name size might affect installations\n1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance\n1915540 - Silent 4.7 RHCOS install failure on ppc64le\n1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)\n1915582 - p\u0026f: carry upstream pr 97860\n1915594 - [e2e][automation] Improve test for disk validation\n1915617 - Bump bootimage for various fixes\n1915624 - \"Please fill in the following field: Template provider\" blocks customize wizard\n1915627 - Translate Guided Tour text. \n1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error\n1915647 - Intermittent White screen when the connector dragged to revision\n1915649 - \"Template support\" pop up is not a warning; checkbox text should be rephrased\n1915654 - [e2e][automation] Add a verification for Afinity modal should hint \"Matching node found\"\n1915661 - Can\u0027t run the \u0027oc adm prune\u0027 command in a pod\n1915672 - Kuryr doesn\u0027t work with selfLink disabled. 
\n1915674 - Golden image PVC creation - storage size should be taken from the template\n1915685 - Message for not supported template is not clear enough\n1915760 - Need to increase timeout to wait rhel worker get ready\n1915793 - quick starts panel syncs incorrectly across browser windows\n1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster\n1915818 - vsphere-problem-detector: use \"_totals\" in metrics\n1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol\n1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version\n1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0\n1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics\n1915885 - Kuryr doesn\u0027t support workers running on multiple subnets\n1915898 - TaskRun log output shows \"undefined\" in streaming\n1915907 - test/cmd/builds.sh uses docker.io\n1915912 - sig-storage-csi-snapshotter image not available\n1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard\n1915939 - Resizing the browser window removes Web Terminal Icon\n1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]\n1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7\n1915962 - ROKS: manifest with machine health check fails to apply in 4.7\n1915972 - Global configuration breadcrumbs do not work as expected\n1915981 - Install ethtool and conntrack in container for debugging\n1915995 - \"Edit RoleBinding Subject\" action under RoleBinding list page kebab actions causes unhandled exception\n1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups\n1916021 - OLM enters infinite loop if Pending CSV 
replaces itself\n1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry\n1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert\u0027s annotations\n1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk\n1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration\n1916145 - Explicitly set minimum versions of python libraries\n1916164 - Update csi-driver-nfs builder \u0026 base images to be consistent with ART\n1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7\n1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third\n1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2\n1916379 - error metrics from vsphere-problem-detector should be gauge\n1916382 - Can\u0027t create ext4 filesystems with Ignition\n1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving \u0027verified: false\u0027 even for verified updates\n1916401 - Deleting an ingress controller with a bad DNS Record hangs\n1916417 - [Kuryr] Must-gather does not have all Custom Resources information\n1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image\n1916454 - teach CCO about upgradeability from 4.6 to 4.7\n1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation\n1916502 - Boot disk mirroring fails with mdadm error\n1916524 - Two rootdisk shows on storage step\n1916580 - Default yaml is broken for VM and VM template\n1916621 - oc adm node-logs examples are wrong\n1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
\n1916692 - Possibly fails to destroy LB and thus cluster\n1916711 - Update Kube dependencies in MCO to 1.20.0\n1916747 - remove links to quick starts if virtualization operator isn\u0027t updated to 2.6\n1916764 - editing a workload with no application applied, will auto fill the app\n1916834 - Pipeline Metrics - Text Updates\n1916843 - collect logs from openshift-sdn-controller pod\n1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed\n1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually\n1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited\n1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error \"Forbidden: cannot specify lbFloatingIP and apiFloatingIP together\"\n1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace\n1917101 - [UPI on oVirt] - \u0027RHCOS image\u0027 topic isn\u0027t located in the right place in UPI document\n1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to \u0027\"ProxyConfigController\" controller failed to sync \"key\"\u0027 error\n1917117 - Common templates - disks screen: invalid disk name\n1917124 - Custom template - clone existing PVC - the name of the target VM\u0027s data volume is hard-coded; only one VM can be created\n1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator\n1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
\n1917148 - [oVirt] Consume 23-10 ovirt sdk\n1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened\n1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console\n1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory\n1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7\n1917327 - annotations.message maybe wrong for NTOPodsNotReady alert\n1917367 - Refactor periodic.go\n1917371 - Add docs on how to use the built-in profiler\n1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console\n1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui\n1917484 - [BM][IPI] Failed to scale down machineset\n1917522 - Deprecate --filter-by-os in oc adm catalog mirror\n1917537 - controllers continuously busy reconciling operator\n1917551 - use min_over_time for vsphere prometheus alerts\n1917585 - OLM Operator install page missing i18n\n1917587 - Manila CSI operator becomes degraded if user doesn\u0027t have permissions to list share types\n1917605 - Deleting an exgw causes pods to no longer route to other exgws\n1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API\n1917656 - Add to Project/application for eventSources from topology shows 404\n1917658 - Show TP badge for sources powered by camel connectors in create flow\n1917660 - Editing parallelism of job get error info\n1917678 - Could not provision pv when no symlink and target found on rhel worker\n1917679 - Hide double CTA in admin pipelineruns tab\n1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster. 
\n1917759 - Console operator panics after setting plugin that does not exists to the console-operator config\n1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0\n1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0\n1917799 - Gather s list of names and versions of installed OLM operators\n1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error\n1917814 - Show Broker create option in eventing under admin perspective\n1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1917872 - [oVirt] rebase on latest SDK 2021-01-12\n1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image\n1917938 - upgrade version of dnsmasq package\n1917942 - Canary controller causes panic in ingress-operator\n1918019 - Undesired scrollbars in markdown area of QuickStart\n1918068 - Flaky olm integration tests\n1918085 - reversed name of job and namespace in cvo log\n1918112 - Flavor is not editable if a customize VM is created from cli\n1918129 - Update IO sample archive with missing resources \u0026 remove IP anonymization from clusteroperator resources\n1918132 - i18n: Volume Snapshot Contents menu is not translated\n1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2\n1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn\u0027t be installed on OSP\n1918153 - When `\u0026` character is set as an environment variable in a build config it is getting converted as `\\u0026`\n1918185 - Capitalization on PLR details page\n1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections\n1918318 - Kamelet connector\u0027s are not shown in eventing section under Admin perspective\n1918351 - Gather SAP configuration (SCC \u0026 ClusterRoleBinding)\n1918375 - [calico] rbac-proxy container in kube-proxy fails to create 
tokenreviews\n1918395 - [ovirt] increase livenessProbe period\n1918415 - MCD nil pointer on dropins\n1918438 - [ja_JP, zh_CN] Serverless i18n misses\n1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig\n1918471 - CustomNoUpgrade Feature gates are not working correctly\n1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk\n1918622 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1918623 - Updating ose-jenkins-agent-nodejs-12 builder \u0026 base images to be consistent with ART\n1918625 - Updating ose-jenkins-agent-nodejs-10 builder \u0026 base images to be consistent with ART\n1918635 - Updating openshift-jenkins-2 builder \u0026 base images to be consistent with ART #1197\n1918639 - Event listener with triggerRef crashes the console\n1918648 - Subscription page doesn\u0027t show InstallPlan correctly\n1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack\n1918748 - helmchartrepo is not http(s)_proxy-aware\n1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1918803 - Need dedicated details page w/ global config breadcrumbs for \u0027KnativeServing\u0027 plugin\n1918826 - Insights popover icons are not horizontally aligned\n1918879 - need better debug for bad pull secrets\n1918958 - The default NMstate instance from the operator is incorrect\n1919097 - Close bracket \")\" missing at the end of the sentence in the UI\n1919231 - quick search modal cut off on smaller screens\n1919259 - Make \"Add x\" singular in Pipeline Builder\n1919260 - VM Template list actions should not wrap\n1919271 - NM prepender script doesn\u0027t support systemd-resolved\n1919341 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry\n1919379 - dotnet logo out of date\n1919387 - Console 
login fails with no error when it can\u0027t write to localStorage\n1919396 - A11y Violation: svg-img-alt on Pod Status ring\n1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren\u0027t verified\n1919750 - Search InstallPlans got Minified React error\n1919778 - Upgrade is stuck in insights operator Degraded with \"Source clusterconfig could not be retrieved\" until insights operator pod is manually deleted\n1919823 - OCP 4.7 Internationalization Chinese tranlate issue\n1919851 - Visualization does not render when Pipeline \u0026 Task share same name\n1919862 - The tip information for `oc new-project --skip-config-write` is wrong\n1919876 - VM created via customize wizard cannot inherit template\u0027s PVC attributes\n1919877 - Click on KSVC breaks with white screen\n1919879 - The toolbox container name is changed from \u0027toolbox-root\u0027 to \u0027toolbox-\u0027 in a chroot environment\n1919945 - user entered name value overridden by default value when selecting a git repository\n1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference\n1919970 - NTO does not update when the tuned profile is updated. 
\n1919999 - Bump Cluster Resource Operator Golang Versions\n1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration\n1920200 - user-settings network error results in infinite loop of requests\n1920205 - operator-registry e2e tests not working properly\n1920214 - Bump golang to 1.15 in cluster-resource-override-admission\n1920248 - re-running the pipelinerun with pipelinespec crashes the UI\n1920320 - VM template field is \"Not available\" if it\u0027s created from common template\n1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`\n1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs\n1920390 - Monitoring \u003e Metrics graph shifts to the left when clicking the \"Stacked\" option and when toggling data series lines on / off\n1920426 - Egress Router CNI OWNERS file should have ovn-k team members\n1920427 - Need to update `oc login` help page since we don\u0027t support prompt interactively for the username\n1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time\n1920438 - openshift-tuned panics on turning debugging on/off. 
\n1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn\n1920481 - kuryr-cni pods using unreasonable amount of CPU\n1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof\n1920524 - Topology graph crashes adding Open Data Hub operator\n1920526 - catalog operator causing CPU spikes and bad etcd performance\n1920551 - Boot Order is not editable for Templates in \"openshift\" namespace\n1920555 - bump cluster-resource-override-admission api dependencies\n1920571 - fcp multipath will not recover failed paths automatically\n1920619 - Remove default scheduler profile value\n1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present\n1920674 - MissingKey errors in bindings namespace\n1920684 - Text in language preferences modal is misleading\n1920695 - CI is broken because of bad image registry reference in the Makefile\n1920756 - update generic-admission-server library to get the system:masters authorization optimization\n1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for \"network-check-target\" failed when \"defaultNodeSelector\" is set\n1920771 - i18n: Delete persistent volume claim drop down is not translated\n1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI\n1920912 - Unable to power off BMH from console\n1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by \"2\"\n1920984 - [e2e][automation] some menu items names are out dated\n1921013 - Gather PersistentVolume definition (if any) used in image registry config\n1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)\n1921087 - \u0027start next quick start\u0027 link doesn\u0027t work and is unintuitive\n1921088 - test-cmd is failing on volumes.sh pretty consistently\n1921248 - Clarify the kubelet configuration cr description\n1921253 - Text filter default placeholder text not 
internationalized\n1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window\n1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo\n1921277 - Fix Warning and Info log statements to handle arguments\n1921281 - oc get -o yaml --export returns \"error: unknown flag: --export\"\n1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn\u0027t exist\n1921556 - [OCS with Vault]: OCS pods didn\u0027t comeup after deploying with Vault details from UI\n1921572 - For external source (i.e GitHub Source) form view as well shows yaml\n1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass\n1921610 - Pipeline metrics font size inconsistency\n1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1921655 - [OSP] Incorrect error handling during cloudinfo generation\n1921713 - [e2e][automation] fix failing VM migration tests\n1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view\n1921774 - delete application modal errors when a resource cannot be found\n1921806 - Explore page APIResourceLinks aren\u0027t i18ned\n1921823 - CheckBoxControls not internationalized\n1921836 - AccessTableRows don\u0027t internationalize \"User\" or \"Group\"\n1921857 - Test flake when hitting router in e2e tests due to one router not being up to date\n1921880 - Dynamic plugins are not initialized on console load in production mode\n1921911 - Installer PR #4589 is causing leak of IAM role policy bindings\n1921921 - \"Global Configuration\" breadcrumb does not use sentence case\n1921949 - Console bug - source code URL broken for gitlab self-hosted repositories\n1921954 - Subscription-related constraints in ResolutionFailed events are misleading\n1922015 - buttons in modal header 
are invisible on Safari\n1922021 - Nodes terminal page \u0027Expand\u0027 \u0027Collapse\u0027 button not translated\n1922050 - [e2e][automation] Improve vm clone tests\n1922066 - Cannot create VM from custom template which has extra disk\n1922098 - Namespace selection dialog is not closed after select a namespace\n1922099 - Updated Readme documentation for QE code review and setup\n1922146 - Egress Router CNI doesn\u0027t have logging support. \n1922267 - Collect specific ADFS error\n1922292 - Bump RHCOS boot images for 4.7\n1922454 - CRI-O doesn\u0027t enable pprof by default\n1922473 - reconcile LSO images for 4.8\n1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace\n1922782 - Source registry missing docker:// in yaml\n1922907 - Interop UI Tests - step implementation for updating feature files\n1922911 - Page crash when click the \"Stacked\" checkbox after clicking the data series toggle buttons\n1922991 - \"verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\" test fails on OKD\n1923003 - WebConsole Insights widget showing \"Issues pending\" when the cluster doesn\u0027t report anything\n1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources\n1923102 - [vsphere-problem-detector-operator] pod\u0027s version is not correct\n1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot\n1923674 - k8s 1.20 vendor dependencies\n1923721 - PipelineRun running status icon is not rotating\n1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios\n1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator\n1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator\n1923874 - Unable to specify values with % in kubeletconfig\n1923888 - Fixes error metadata gathering\n1923892 - Update arch.md after 
refactor. \n1923894 - \"installed\" operator status in operatorhub page does not reflect the real status of operator\n1923895 - Changelog generation. \n1923911 - [e2e][automation] Improve tests for vm details page and list filter\n1923945 - PVC Name and Namespace resets when user changes os/flavor/workload\n1923951 - EventSources shows `undefined` in project\n1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins\n1924046 - Localhost: Refreshing on a Project removes it from nav item urls\n1924078 - Topology quick search View all results footer should be sticky. \n1924081 - NTO should ship the latest Tuned daemon release 2.15\n1924084 - backend tests incorrectly hard-code artifacts dir\n1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\n1924135 - Under sufficient load, CRI-O may segfault\n1924143 - Code Editor Decorator url is broken for Bitbucket repos\n1924188 - Language selector dropdown doesn\u0027t always pre-select the language\n1924365 - Add extra disk for VM which use boot source PXE\n1924383 - Degraded network operator during upgrade to 4.7.z\n1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
\n1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can\u0027t set finalizers on\n1924583 - Deprectaed templates are listed in the Templates screen\n1924870 - pick upstream pr#96901: plumb context with request deadline\n1924955 - Images from Private external registry not working in deploy Image\n1924961 - k8sutil.TrimDNS1123Label creates invalid values\n1924985 - Build egress-router-cni for both RHEL 7 and 8\n1925020 - Console demo plugin deployment image shoult not point to dockerhub\n1925024 - Remove extra validations on kafka source form view net section\n1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running\n1925072 - NTO needs to ship the current latest stalld v1.7.0\n1925163 - Missing info about dev catalog in boot source template column\n1925200 - Monitoring Alert icon is missing on the workload in Topology view\n1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1\n1925319 - bash syntax error in configure-ovs.sh script\n1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data\n1925516 - Pipeline Metrics Tooltips are overlapping data\n1925562 - Add new ArgoCD link from GitOps application environments page\n1925596 - Gitops details page image and commit id text overflows past card boundary\n1926556 - \u0027excessive etcd leader changes\u0027 test case failing in serial job because prometheus data is wiped by machine set test\n1926588 - The tarball of operator-sdk is not ready for ocp4.7\n1927456 - 4.7 still points to 4.6 catalog images\n1927500 - API server exits non-zero on 2 SIGTERM signals\n1929278 - Monitoring workloads using too high a priorityclass\n1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n1929920 - Cluster monitoring documentation link is broken - 404 not found\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-10103\nhttps://access.redhat.com/security/cve/CVE-2018-10105\nhttps://access.redhat.com/security/cve/CVE-2018-14461\nhttps://access.redhat.com/security/cve/CVE-2018-14462\nhttps://access.redhat.com/security/cve/CVE-2018-14463\nhttps://access.redhat.com/security/cve/CVE-2018-14464\nhttps://access.redhat.com/security/cve/CVE-2018-14465\nhttps://access.redhat.com/security/cve/CVE-2018-14466\nhttps://access.redhat.com/security/cve/CVE-2018-14467\nhttps://access.redhat.com/security/cve/CVE-2018-14468\nhttps://access.redhat.com/security/cve/CVE-2018-14469\nhttps://access.redhat.com/security/cve/CVE-2018-14470\nhttps://access.redhat.com/security/cve/CVE-2018-14553\nhttps://access.redhat.com/security/cve/CVE-2018-14879\nhttps://access.redhat.com/security/cve/CVE-2018-14880\nhttps://access.redhat.com/security/cve/CVE-2018-14881\nhttps://access.redhat.com/security/cve/CVE-2018-14882\nhttps://access.redhat.com/security/cve/CVE-2018-16227\nhttps://access.redhat.com/security/cve/CVE-2018-16228\nhttps://access.redhat.com/security/cve/CVE-2018-16229\nhttps://access.redhat.com/security/cve/CVE-2018-16230\nhttps://access.redhat.com/security/cve/CVE-2018-16300\nhttps://access.redhat.com/security/cve/CVE-2018-16451\nhttps://access.redhat.com/security/cve/CVE-2018-16452\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2019-3884\nhttps://access.redhat.com/security/cve/CVE-2019-5018\nhttps://access.redhat.com/security/cve/CVE-2019-6977\nhttps://access.redhat.com/security/cve/CVE-2019-6978\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.r
edhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9455\nhttps://access.redhat.com/security/cve/CVE-2019-9458\nhttps://access.redhat.com/security/cve/CVE-2019-11068\nhttps://access.redhat.com/security/cve/CVE-2019-12614\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13225\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15165\nhttps://access.redhat.com/security/cve/CVE-2019-15166\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-15917\nhttps://access.redhat.com/security/cve/CVE-2019-15925\nhttps://access.redhat.com/security/cve/CVE-2019-16167\nhttps://access.redhat.com/security/cve/CVE-2019-16168\nhttps://access.redhat.com/security/cve/CVE-2019-16231\nhttps://access.redhat.com/security/cve/CVE-2019-16233\nhttps://access.redhat.com/security/cve/CVE-2019-16935\nhttps://access.redhat.com/security/cve/CVE-2019-17450\nhttps://access.redhat.com/security/cve/CVE-2019-17546\nhttps://access.redhat.com/security/cve/CVE-2019-18197\
nhttps://access.redhat.com/security/cve/CVE-2019-18808\nhttps://access.redhat.com/security/cve/CVE-2019-18809\nhttps://access.redhat.com/security/cve/CVE-2019-19046\nhttps://access.redhat.com/security/cve/CVE-2019-19056\nhttps://access.redhat.com/security/cve/CVE-2019-19062\nhttps://access.redhat.com/security/cve/CVE-2019-19063\nhttps://access.redhat.com/security/cve/CVE-2019-19068\nhttps://access.redhat.com/security/cve/CVE-2019-19072\nhttps://access.redhat.com/security/cve/CVE-2019-19221\nhttps://access.redhat.com/security/cve/CVE-2019-19319\nhttps://access.redhat.com/security/cve/CVE-2019-19332\nhttps://access.redhat.com/security/cve/CVE-2019-19447\nhttps://access.redhat.com/security/cve/CVE-2019-19524\nhttps://access.redhat.com/security/cve/CVE-2019-19533\nhttps://access.redhat.com/security/cve/CVE-2019-19537\nhttps://access.redhat.com/security/cve/CVE-2019-19543\nhttps://access.redhat.com/security/cve/CVE-2019-19602\nhttps://access.redhat.com/security/cve/CVE-2019-19767\nhttps://access.redhat.com/security/cve/CVE-2019-19770\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20054\nhttps://access.redhat.com/security/cve/CVE-2019-20218\nhttps://access.redhat.com/security/cve/CVE-2019-20386\nhttps://access.redhat.com/security/cve/CVE-2019-20387\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20636\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/security/cve/CVE-2019-20812\nhttps://access.redhat.com/security/cve/CVE-2019-20907\nhttps://access.redhat.com/security/cve/CVE-2019-20916\nhttps://access.redhat.com/security/cve/CVE-2020-0305\nhttps://access.redhat.com/security/cve/CVE-2020-0444\nhttps://access.redhat.com/security/cve/CVE-2020-1716\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.c
om/security/cve/CVE-2020-1751\nhttps://access.redhat.com/security/cve/CVE-2020-1752\nhttps://access.redhat.com/security/cve/CVE-2020-1971\nhttps://access.redhat.com/security/cve/CVE-2020-2574\nhttps://access.redhat.com/security/cve/CVE-2020-2752\nhttps://access.redhat.com/security/cve/CVE-2020-2922\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3898\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-6405\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-8177\nhttps://access.redhat.com/security/cve/CVE-2020-8492\nhttps://access.redhat.com/security/cve/CVE-2020-8563\nhttps://access.redhat.com/security/cve/CVE-2020-8566\nhttps://access.redhat.com/security/cve/CVE-2020-8619\nhttps://access.redhat.com/security/cve/CVE-2020-8622\nhttps://access.redhat.com/security/cve/CVE-2020-8623\nhttps://access.redhat.com/security/cve/CVE-2020-8624\nhttps://access.redhat.com/security/cve/CVE-2020-8647\nhttps://access.redhat.com/security/cve/CVE-2020-8648\nhttps://access.redhat.com/security/cve/CVE-2020-8649\nhttps://access.redhat.com/security/cve/CVE-2020-9327\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com
/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-10029\nhttps://access.redhat.com/security/cve/CVE-2020-10732\nhttps://access.redhat.com/security/cve/CVE-2020-10749\nhttps://access.redhat.com/security/cve/CVE-2020-10751\nhttps://access.redhat.com/security/cve/CVE-2020-10763\nhttps://access.redhat.com/security/cve/CVE-2020-10773\nhttps://access.redhat.com/security/cve/CVE-2020-10774\nhttps://access.redhat.com/security/cve/CVE-2020-10942\nhttps://access.redhat.com/security/cve/CVE-2020-11565\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-12465\nhttps://access.redhat.com/security/cve/CVE-2020-12655\nhttps://access.redhat.com/security/cve/CVE-2020-12659\nhttps://access.redhat.com/security/cve/CVE-2020-12770\nhttps://access.redhat.com/security/cve/CVE-2020-12826\nhttps://access.redhat.com/security/cve/CVE-2020-13249\nhttps://access.redhat.com/security/cve/CVE-2020-13630\nhttps://access.redhat.com/security/cve/CVE-2020-13631\nhttps://access.redhat.com/security/cve/CVE-2020-13632\nhttps://access.redhat.com/security/cve/CVE-2020-14019\nhttps://access.redhat.com/security/cve/CVE-2020-14040\nhttps://access.redhat.com/security/cve/CVE-2020-14381\nhttps://access.redhat.com/security/cve/CVE-2020-14382\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nh
ttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15157\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-15862\nhttps://access.redhat.com/security/cve/CVE-2020-15999\nhttps://access.redhat.com/security/cve/CVE-2020-16166\nhttps://access.redhat.com/security/cve/CVE-2020-24490\nhttps://access.redhat.com/security/cve/CVE-2020-24659\nhttps://access.redhat.com/security/cve/CVE-2020-25211\nhttps://access.redhat.com/security/cve/CVE-2020-25641\nhttps://access.redhat.com/security/cve/CVE-2020-25658\nhttps://access.redhat.com/security/cve/CVE-2020-25661\nhttps://access.redhat.com/security/cve/CVE-2020-25662\nhttps://access.redhat.com/security/cve/CVE-2020-25681\nhttps://access.redhat.com/security/cve/CVE-2020-25682\nhttps://access.redhat.com/security/cve/CVE-2020-25683\nhttps://access.redhat.com/security/cve/CVE-2020-25684\nhttps://access.redhat.com/security/cve/CVE-2020-25685\nhttps://access.redhat.com/security/cve/CVE-2020-25686\nhttps://access.redhat.com/security/cve/CVE-2020-25687\nhttps://access.redhat.com/security/cve/CVE-2020-25694\nhttps://access.redhat.com/security/cve/CVE-2020-25696\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-27813\nhttps://access.redhat.com/security/cve/CVE-2020-27846\nhttps://access.redhat.com/security/cve/CVE-2020-28362\nhttps://access.redhat.com/security/cve/CVE-2020-29652\nhttps://access.redhat.com/security/cve/CVE-2021-2007\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T\nlmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H\nEmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8\n4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4\nmWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL\nISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy\nAe5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk\n4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM\nuR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG\nkrzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv\nRjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6\nMcvuEaxco7U=\n=sw8i\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nBug Fix(es):\n\n* Configuring the system with non-RT kernel will hang the system\n(BZ#1923220)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nCNF-802 - Infrastructure-provided enablement/disablement of interrupt processing for guaranteed pod CPUs\nCNF-854 - Performance tests in CNF Tests\n\n6. Description:\n\nRed Hat OpenShift Do (odo) is a simple CLI tool for developers to create,\nbuild, and deploy applications on OpenShift. The odo tool is completely\nclient-based and requires no server within the OpenShift cluster for\ndeployment. It detects changes to local code and deploys it to the cluster\nautomatically, giving instant feedback to validate changes in real-time. It\nsupports multiple programming languages and frameworks. \n\nThe advisory addresses the following issues:\n\n* Re-release of odo-init-image 1.1.3 for security updates\n\n3. 
Solution:\n\nDownload and install a new CLI binary by following the instructions linked\nfrom the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1832983 - Release of 1.1.3 odo-init-image\n\n5. Solution:\n\nSee the documentation at:\nhttps://access.redhat.com/documentation/en-us/openshift_container_platform/\n4.6/html/serverless_applications/index\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1874857 - CVE-2020-24553 golang: default Content-Type setting in net/http/cgi and net/http/fcgi could cause XSS\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897643 - CVE-2020-28366 golang: malicious symbol names can lead to code execution at build time\n1897646 - CVE-2020-28367 golang: improper validation of cgo flags can lead to code execution at build time\n1906381 - Release of OpenShift Serverless Serving 1.12.0\n1906382 - Release of OpenShift Serverless Eventing 1.12.0\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-7595"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "PACKETSTORM",
"id": "160624"
},
{
"db": "PACKETSTORM",
"id": "160889"
},
{
"db": "PACKETSTORM",
"id": "160125"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "161546"
},
{
"db": "PACKETSTORM",
"id": "161548"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "160961"
}
],
"trust": 2.52
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2020-7595",
"trust": 3.4
},
{
"db": "SIEMENS",
"id": "SSA-292794",
"trust": 1.8
},
{
"db": "ICS CERT",
"id": "ICSA-21-103-08",
"trust": 1.8
},
{
"db": "PACKETSTORM",
"id": "161916",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU96269392",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2020-001451",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "159851",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159349",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "162694",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159639",
"trust": 0.7
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2021.0584",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1207",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3535",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2604",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1744",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0902",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4513",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1242",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1727",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3364",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1564",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2162",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1826",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0234",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3631",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0864",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0471",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0845",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3868",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0986",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3550",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0691",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3248",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4100",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3102",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0319",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1193",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0171",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3072",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0099",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1638",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4058",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "158168",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021041514",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021091331",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021052216",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072097",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021111735",
"trust": 0.6
},
{
"db": "CNVD",
"id": "CNVD-2020-04827",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-185720",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2020-7595",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160624",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160889",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160125",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159661",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "161546",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "161548",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160961",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"db": "PACKETSTORM",
"id": "160624"
},
{
"db": "PACKETSTORM",
"id": "160889"
},
{
"db": "PACKETSTORM",
"id": "160125"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "161546"
},
{
"db": "PACKETSTORM",
"id": "161548"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "160961"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"id": "VAR-202001-1866",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T19:54:45.829000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "0e1a49c8",
"trust": 0.8,
"url": "https://gitlab.gnome.org/gnome/libxml2/commit/0e1a49c89076"
},
{
"title": "libxml2 Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=109237"
},
{
"title": "Debian CVElist Bug Report Logs: libxml2: CVE-2020-7595",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=8128495aba3a49b2f3e0b9ee0e8401af"
},
{
"title": "Ubuntu Security Notice: libxml2 vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-4274-1"
},
{
"title": "Red Hat: Moderate: libxml2 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204479 - security advisory"
},
{
"title": "Red Hat: Moderate: libxml2 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20203996 - security advisory"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2020-7595 log"
},
{
"title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP3 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20202646 - security advisory"
},
{
"title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP3 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20202644 - security advisory"
},
{
"title": "Amazon Linux AMI: ALAS-2020-1438",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2020-1438"
},
{
"title": "Arch Linux Advisories: [ASA-202011-15] libxml2: multiple issues",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202011-15"
},
{
"title": "Amazon Linux 2: ALAS2-2020-1534",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2020-1534"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=0d160980ab72db34060d62c89304b6f2"
},
{
"title": "Red Hat: Moderate: Release of OpenShift Serverless 1.11.0",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205149 - security advisory"
},
{
"title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.6 runner release (CVE-2019-18874)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204255 - security advisory"
},
{
"title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.7 runner release (CVE-2019-18874)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204254 - security advisory"
},
{
"title": "Red Hat: Moderate: Release of OpenShift Serverless 1.12.0",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210146 - security advisory"
},
{
"title": "Red Hat: Low: OpenShift Container Platform 4.3.40 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204264 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210190 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210436 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210050 - security advisory"
},
{
"title": "IBM: Security Bulletin: IBM Security Guardium is affected by multiple vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3201548b0e11fd3ecd83fd36fc045a8e"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205605 - security advisory"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/vincent-deng/veracode-container-security-finding-parser "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-835",
"trust": 1.1
},
{
"problemtype": "infinite loop (CWE-835) [NVD Evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 2.5,
"url": "https://usn.ubuntu.com/4274-1/"
},
{
"trust": 2.4,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-21-103-08"
},
{
"trust": 2.4,
"url": "https://www.oracle.com/security-alerts/cpujul2020.html"
},
{
"trust": 1.8,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-292794.pdf"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20200702-0005/"
},
{
"trust": 1.8,
"url": "https://security.gentoo.org/glsa/202010-04"
},
{
"trust": 1.8,
"url": "https://gitlab.gnome.org/gnome/libxml2/commit/0e1a49c89076"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2020/09/msg00009.html"
},
{
"trust": 1.8,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-05/msg00047.html"
},
{
"trust": 1.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-7595"
},
{
"trust": 1.4,
"url": "https://access.redhat.com/security/cve/cve-2020-7595"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/545spoi3zppnpx4tfrive4jvrtjrkull/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5r55zr52rmbx24tqtwhciwkjvrv6yawi/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jdpf3aavkuakdyfmfksiqsvvs3eefpqh/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu96269392/index.html"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-20388"
},
{
"trust": 0.8,
"url": "https://bugzilla.redhat.com/"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-19956"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-15903"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2018-20843"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5r55zr52rmbx24tqtwhciwkjvrv6yawi/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/545spoi3zppnpx4tfrive4jvrtjrkull/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/jdpf3aavkuakdyfmfksiqsvvs3eefpqh/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-20907"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-13050"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-20218"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-19221"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-1751"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-16168"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-9327"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-16935"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-5018"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-1730"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-19906"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-20387"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-1752"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-8492"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-20454"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-13627"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-6405"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-14889"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-13632"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-10029"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-13630"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-13631"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6455281"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3535/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0902/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3248/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021052216"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2162/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1727"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1207"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-mq-appliance-is-affected-by-libxml2-vulnerabilities-cve-2019-19956-cve-2019-20388-cve-2020-7595/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-4/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0171/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3072"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-bladecenter-advanced-management-module-amm-is-affected-by-vulnerabilities-in-libxml2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4100/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6520474"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0845"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162694/red-hat-security-advisory-2021-2021-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4058"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1638/"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/libxml2-infinite-loop-via-xmlstringlendecodeentities-31396"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3868/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1744"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072097"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158168/red-hat-security-advisory-2020-2646-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021111735"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0319/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0471/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-network-security-is-affected-by-multiple-vulnerabilities-2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-6/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1193"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1564/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-flex-system-chassis-management-module-cmm-is-affected-by-vulnerabilities-in-libxml2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0986"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-bootable-media-creator-bomc-is-affected-by-vulnerabilities-in-libxml2/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159349/red-hat-security-advisory-2020-3996-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-6/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021091331"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2604"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159851/red-hat-security-advisory-2020-4479-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1242"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021041514"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1826/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159639/gentoo-linux-security-advisory-202010-04.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3102/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3550"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161916/red-hat-security-advisory-2021-0949-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-rackswitch-firmware-products-are-affected-by-vulnerabilities-in-libxml2/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-5/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3631/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3364/"
},
{
"trust": 0.5,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2019-20916"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-14422"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-1971"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-15165"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-14382"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-8177"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24659"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9925"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9802"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9895"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8625"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8812"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3899"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8819"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3867"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8720"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9893"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8808"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3902"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-18197"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3900"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9805"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8820"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9807"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8769"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8710"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8813"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9850"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8811"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9803"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9862"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3885"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-15503"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-10018"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8835"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8764"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8844"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3865"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3864"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-14391"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3862"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3901"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8823"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3895"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-11793"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9894"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8816"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9843"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8771"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3897"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9806"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8814"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8743"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9915"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8815"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8783"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-20807"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-14040"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-11068"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8766"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8846"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3868"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-3894"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8782"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13631"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10029"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13630"
},
{
"trust": 0.3,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-16300"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-10105"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-15166"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-16230"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-16229"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14882"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-16227"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14461"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14464"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14469"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14880"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-1551"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14468"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14466"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14467"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14462"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14881"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-16451"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-10103"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-16228"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14463"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14879"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14019"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14470"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14465"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-16452"
},
{
"trust": 0.2,
"url": "https://issues.jboss.org/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1752"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless_applications/index"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13632"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1751"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17006"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-12749"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17023"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17023"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-6829"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-12403"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11756"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-11756"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-12243"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17498"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12749"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17006"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-5094"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-2752"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-20386"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-2574"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17546"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-12400"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-11727"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11719"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-12402"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5188"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-12401"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-11719"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5094"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11727"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-5188"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17498"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-25211"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17450"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/errata/rhsa-2020:5633"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-28362"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-27813"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/835.html"
},
{
"trust": 0.1,
"url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=949582"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18609"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_container_s"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:5605"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25660"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15586"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=1885700"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-7720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8237"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:0050"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27831"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27832"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:5149"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4264"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-2974"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19126"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-12652"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5482"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2226"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2780"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2974"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14352"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14822"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14822"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2225"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5482"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12825"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-18190"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2181"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-12652"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2182"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.3/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-18190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2224"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9283"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19126"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19770"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-11668"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25684"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24490"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-2007"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19072"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8649"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26160"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12655"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-9458"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13225"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13249"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27846"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20636"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18808"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18809"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20054"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12826"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15862"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25683"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19602"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25661"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25641"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-6977"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8647"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29652"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15917"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16166"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10774"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-7774"
},
{
"trust": 0.1,
"url": "https://\u0027"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0305"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12659"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15157"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-6978"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25658"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0444"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-16233"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25694"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-14553"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25682"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10751"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3884"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10763"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10942"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8622"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19062"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19046"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12465"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19447"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25685"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-16231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14381"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19056"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8648"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12770"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19767"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19533"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25686"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19537"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2922"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25687"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-16167"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-9455"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-11565"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19332"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12614"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25681"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19063"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19319"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8563"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10732"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:5634"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10726"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10723"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10725"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10723"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10725"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10722"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10722"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10726"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:5364"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12401"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:0949"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12243"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12400"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12402"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12403"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_developer_cli/installing-odo.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-6829"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:0146"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24553"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24553"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24659"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28366"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28366"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28367"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28367"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"db": "PACKETSTORM",
"id": "160624"
},
{
"db": "PACKETSTORM",
"id": "160889"
},
{
"db": "PACKETSTORM",
"id": "160125"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "161546"
},
{
"db": "PACKETSTORM",
"id": "161548"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "160961"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"db": "PACKETSTORM",
"id": "160624"
},
{
"db": "PACKETSTORM",
"id": "160889"
},
{
"db": "PACKETSTORM",
"id": "160125"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "161546"
},
{
"db": "PACKETSTORM",
"id": "161548"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "160961"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2020-01-21T00:00:00",
"db": "VULHUB",
"id": "VHN-185720"
},
{
"date": "2020-01-21T00:00:00",
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"date": "2020-02-07T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"date": "2020-12-18T19:14:41",
"db": "PACKETSTORM",
"id": "160624"
},
{
"date": "2021-01-11T16:29:48",
"db": "PACKETSTORM",
"id": "160889"
},
{
"date": "2020-11-18T20:48:43",
"db": "PACKETSTORM",
"id": "160125"
},
{
"date": "2020-10-21T15:40:32",
"db": "PACKETSTORM",
"id": "159661"
},
{
"date": "2021-02-25T15:29:25",
"db": "PACKETSTORM",
"id": "161546"
},
{
"date": "2021-02-25T15:30:03",
"db": "PACKETSTORM",
"id": "161548"
},
{
"date": "2021-03-22T15:36:55",
"db": "PACKETSTORM",
"id": "161916"
},
{
"date": "2021-01-15T15:06:55",
"db": "PACKETSTORM",
"id": "160961"
},
{
"date": "2020-01-21T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"date": "2020-01-21T23:15:13.867000",
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-07-25T00:00:00",
"db": "VULHUB",
"id": "VHN-185720"
},
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"date": "2021-06-16T04:57:00",
"db": "JVNDB",
"id": "JVNDB-2020-001451"
},
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"date": "2023-11-07T03:26:07.513000",
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "160624"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
],
"trust": 0.7
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "libxml2\u00a0 Infinite loop vulnerability in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-001451"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
],
"trust": 0.6
}
}
VAR-202207-0107
Vulnerability from variot - Updated: 2024-07-23 19:53AES OCB mode for 32-bit x86 platforms using the AES-NI assembly optimised implementation will not encrypt the entirety of the data under some circumstances. This could reveal sixteen bytes of data that was preexisting in the memory that wasn't written. In the special case of "in place" encryption, sixteen bytes of the plaintext would be revealed. Since OpenSSL does not support OCB based cipher suites for TLS and DTLS, they are both unaffected. Fixed in OpenSSL 3.0.5 (Affected 3.0.0-3.0.4). Fixed in OpenSSL 1.1.1q (Affected 1.1.1-1.1.1p). The issue in CVE-2022-1292 did not find other places in the c_rehash script where it possibly passed the file names of certificates being hashed to a command executed through the shell. Some operating systems distribute this script in a manner where it is automatically executed. On these operating systems, this flaw allows an malicious user to execute arbitrary commands with the privileges of the script. (CVE-2022-2097). - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory GLSA 202210-02
https://security.gentoo.org/
Severity: Normal Title: OpenSSL: Multiple Vulnerabilities Date: October 16, 2022 Bugs: #741570, #809980, #832339, #835343, #842489, #856592 ID: 202210-02
Synopsis
Multiple vulnerabilities have been discovered in OpenSSL, the worst of which could result in denial of service.
Background
OpenSSL is an Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) as well as a general purpose cryptography library.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 dev-libs/openssl < 1.1.1q >= 1.1.1q
Description
Multiple vulnerabilities have been discovered in OpenSSL. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All OpenSSL users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=dev-libs/openssl-1.1.1q"
References
[ 1 ] CVE-2020-1968 https://nvd.nist.gov/vuln/detail/CVE-2020-1968 [ 2 ] CVE-2021-3711 https://nvd.nist.gov/vuln/detail/CVE-2021-3711 [ 3 ] CVE-2021-3712 https://nvd.nist.gov/vuln/detail/CVE-2021-3712 [ 4 ] CVE-2021-4160 https://nvd.nist.gov/vuln/detail/CVE-2021-4160 [ 5 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778 [ 6 ] CVE-2022-1292 https://nvd.nist.gov/vuln/detail/CVE-2022-1292 [ 7 ] CVE-2022-1473 https://nvd.nist.gov/vuln/detail/CVE-2022-1473 [ 8 ] CVE-2022-2097 https://nvd.nist.gov/vuln/detail/CVE-2022-2097
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-02
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update Advisory ID: RHSA-2022:6156-01 Product: RHODF Advisory URL: https://access.redhat.com/errata/RHSA-2022:6156 Issue date: 2022-08-24 CVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528 CVE-2022-0235 CVE-2022-0536 CVE-2022-0670 CVE-2022-1292 CVE-2022-1586 CVE-2022-1650 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24675 CVE-2022-24771 CVE-2022-24772 CVE-2022-24773 CVE-2022-24785 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-29526 CVE-2022-29810 CVE-2022-29824 CVE-2022-31129 ==================================================================== 1. Summary:
Updated images that include numerous enhancements, security, and bug fixes are now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.
Security Fix(es):
-
eventsource: Exposure of Sensitive Information (CVE-2022-1650)
-
moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
-
nodejs-set-value: type confusion allows bypass of CVE-2019-10747 (CVE-2021-23440)
-
nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
-
node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
-
follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
-
prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
-
golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
-
golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
-
golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
-
golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
-
node-forge: Signature verification leniency in checking
digestAlgorithmstructure can lead to signature forgery (CVE-2022-24771) -
node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery (CVE-2022-24772)
-
node-forge: Signature verification leniency in checking
DigestInfostructure (CVE-2022-24773) -
Moment.js: Path traversal in moment.locale (CVE-2022-24785)
-
golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
-
golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)
-
golang: syscall: faccessat checks wrong group (CVE-2022-29526)
-
go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images, which provide numerous bug fixes and enhancements.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1937117 - Deletion of StorageCluster doesn't remove ceph toolbox pod
1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified
1973317 - libceph: read_partial_message and bad crc/signature errors
1996829 - Permissions assigned to ceph auth principals when using external storage are too broad
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747
2027724 - Warning log for rook-ceph-toolbox in ocs-operator log
2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056697 - odf-csi-addons-operator subscription failed while using custom catalog source
2058211 - Add validation for CIDR field in DRPolicy
2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced
2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10
2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE
2063691 - [GSS] [RFE] Add termination policy to s3 route
2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2066514 - OCS operator to install Ceph prometheus alerts instead of Rook
2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking DigestInfo structure
2069314 - OCS external mode should allow specifying names for all Ceph auth principals
2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
2069812 - must-gather: rbd_vol_and_snap_info collection is broken
2069815 - must-gather: essential rbd mirror command outputs aren't collected
2070542 - After creating a new storage system it redirects to 404 error page instead of the "StorageSystems" page for OCP 4.11
2071494 - [DR] Applications are not getting deployed
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty
2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled
2075426 - 4.10 must gather is not available after GA of 4.10
2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in "Progressing" state although all the openshift-storage pods are up and Running
2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost
2077242 - vg-manager missing permissions
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2079866 - [DR] odf-multicluster-console is in CLBO state
2079873 - csi-nfsplugin pods are not coming up after successful patch request to update "ROOK_CSI_ENABLE_NFS": "true"'
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2081680 - Add the LVM Operator into the Storage category in OperatorHub
2082028 - UI does not have the option to configure capacity, security and networks,etc. during storagesystem creation
2082078 - OBC's not getting created on primary cluster when manageds3 set as "true" for mirrorPeer
2082497 - Do not filter out removable devices
2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)
2083441 - LVM operator should deploy the volumesnapshotclass resource
2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status
2083993 - Add missing pieces for storageclassclaim
2084041 - [Console Migration] Link-able storage system name directs to blank page
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided"
2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates
2084546 - [Console Migration] Provider details absent under backing store in UI
2084565 - [Console Migration] The creation of new backing store , directs to a blank page
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred
2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace
2086557 - Thin pool in lvm operator doesn't use all disks
2086675 - [UI]No option to "add capacity" via the Installed Operators tab
2086982 - ODF 4.11 deployment is failing
2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm
2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2087107 - Set default storage class if none is set
2087237 - [UI] After clicking on Create StorageSystem, it navigates to Storage Systems tab but shows an error message
2087675 - ocs-metrics-exporter pod crashes on odf v4.11
2087732 - [Console Migration] Events page missing under new namespace store
2087755 - [Console Migration] Bucket Class details page doesn't have the complete details in UI
2088359 - Send VG Metrics even if storage is being consumed from thinPool alone
2088380 - KMS using vault on standalone MCG cluster is not enabled
2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint
2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook
2089296 - [MS v2] Storage cluster in error phase and 'ocs-provider-qe' addon installation failed with ODF 4.10.2
2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts
2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9.
2089552 - [MS v2] Cannot create StorageClassClaim
2089567 - [Console Migration] Improve the styling of Various Components
2089786 - [Console Migration] "Attach to deployment" option is missing in kebab menu for Object Bucket Claims .
2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket.
2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed
2090278 - [LVMO] Some containers are missing resource requirements and limits
2090314 - [LVMO] CSV is missing some useful annotations
2090953 - [MCO] DRCluster created under default namespace
2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics
2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool.
2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference
2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external
2091894 - All backingstores in cluster spontaneously change their own secret
2091951 - [GSS] OCS pods are restarting due to liveness probe failure
2091998 - Volume Snapshots not work with external restricted mode
2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool
2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks
2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)
2092349 - Enable zeroing on the thin-pool during creation
2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase
2092400 - [MS v2] StorageClassClaim creation is failing with error "no StorageCluster found"
2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically
2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2094179 - MCO fails to create DRClusters when replication mode is synchronous
2094853 - [Console Migration] Description under storage class drop down in add capacity is missing.
2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2095155 - Use tool black to format the python external script
2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster
2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time
2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page
2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened
2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS moved to False
2096937 - Storage - Data Foundation: i18n misses
2097216 - Collect StorageClassClaim details in must-gather
2097287 - [UI] Dropdown doesn't close on its own after arbiter zone selection on 'Capacity and nodes' page
2097305 - Add translations for ODF 4.11
2098121 - Managed ODF not getting detected
2098261 - Remove BlockPools (no use case) and Object (redundant with Overview) tabs on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled
2099581 - StorageClassClaim with encryption gets into Failed state
2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project
2099646 - Block pool list page kebab action menu is showing empty options
2099660 - OCS dashboards not appearing unless user clicks on "Overview" Tab
2099724 - S3 secret namespace on the managed cluster doesn't match with the namespace in the s3profile
2099965 - rbd: provide option to disable setting metadata on RBD images
2100326 - [ODF to ODF] Volume snapshot creation failed
2100352 - Make lvmo pod labels more uniform
2100946 - Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install
2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection
2103818 - Restored snapshots don't have any content
2104833 - Need to update configmap for IBM storage odf operator GA
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- References:
https://access.redhat.com/security/cve/CVE-2021-23440
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0536
https://access.redhat.com/security/cve/CVE-2022-0670
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1650
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24771
https://access.redhat.com/security/cve/CVE-2022-24772
https://access.redhat.com/security/cve/CVE-2022-24773
https://access.redhat.com/security/cve/CVE-2022-24785
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-29526
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBYwZpHdzjgjWX9erEAQgy1Q//QaStGj34eQ0ap5J5gCcC1lTv7U908fNy Xo7VvwAi67IslacAiQhWNyhg+jr1c46Op7kAAC04f8n25IsM+7xYYyieJ0YDAP7N b3iySRKnPI6I9aJlN0KMm7J1jfjFmcuPMrUdDHiSGNsmK9zLmsQs3dGMaCqYX+fY sJEDPnMMulbkrPLTwSG2IEcpqGH2BoEYwPhSblt2fH0Pv6H7BWYF/+QjxkGOkGDj gz0BBnc1Foir2BpYKv6/+3FUbcXFdBXmrA5BIcZ9157Yw3RP/khf+lQ6I1KYX1Am 2LI6/6qL8HyVWyl+DEUz0DxoAQaF5x61C35uENyh/U96sYeKXtP9rvDC41TvThhf mX4woWcUN1euDfgEF22aP9/gy+OsSyfP+SV0d9JKIaM9QzCCOwyKcIM2+CeL4LZl CSAYI7M+cKsl1wYrioNBDdG8H54GcGV8kS1Hihb+Za59J7pf/4IPuHy3Cd6FBymE hTFLE9YGYeVtCufwdTw+4CEjB2jr3WtzlYcSc26SET9aPCoTUmS07BaIAoRmzcKY 3KKSKi3LvW69768OLQt8UT60WfQ7zHa+OWuEp1tVoXe/XU3je42yuptCd34axn7E 2gtZJOocJxL2FtehhxNTx7VI3Bjy2V0VGlqqf1t6/z6r0IOhqxLbKeBvH9/XF/6V ERCapzwcRuQ=gV+z -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security fix:
- CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
Bug fixes:
- Remove 1.9.1 from Proxy Patch Documentation (BZ# 2076856)
- RHACM 2.3.12 images (BZ# 2101411)
Bugs fixed (https://bugzilla.redhat.com/):
2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation
2101411 - RHACM 2.3.12 images
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- ==========================================================================
Ubuntu Security Notice USN-5502-1
July 05, 2022
openssl vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
OpenSSL could be made to expose sensitive information over the network. A remote attacker could possibly use this issue to obtain sensitive information.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: libssl3 3.0.2-0ubuntu1.6
Ubuntu 21.10: libssl1.1 1.1.1l-1ubuntu1.6
Ubuntu 20.04 LTS: libssl1.1 1.1.1f-1ubuntu2.16
Ubuntu 18.04 LTS: libssl1.1 1.1.1-1ubuntu2.1~18.04.20
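The Ubuntu package versions above are distribution backports; upstream, the underlying OpenSSL issue (CVE-2022-2097 per this entry's configuration data) affects 1.1.1 through 1.1.1p and 3.0.0 through 3.0.4, fixed in 1.1.1q and 3.0.5. As a rough sketch only, a helper (illustrative, not part of the notice) can test whether a plain upstream OpenSSL version string falls inside those affected ranges:

```python
# Hedged sketch: test whether an *upstream* OpenSSL version string is in
# the ranges affected by CVE-2022-2097 (1.1.1 <= v < 1.1.1q, 3.0.0 <= v < 3.0.5).
# Distribution packages backport fixes, so this does not apply to versions
# like "1.1.1f-1ubuntu2.16"; the function name is illustrative.

def openssl_affected(version: str) -> bool:
    """Return True if `version` is an upstream release in an affected range."""
    if version.startswith("1.1.1"):
        # 1.1.1 optionally followed by a single patch letter: 1.1.1, 1.1.1a .. 1.1.1p
        suffix = version[len("1.1.1"):]
        return suffix == "" or (len(suffix) == 1 and "a" <= suffix < "q")
    if version.startswith("3.0."):
        try:
            patch = int(version.split(".")[2])
        except ValueError:
            return False
        return 0 <= patch < 5  # 3.0.0 .. 3.0.4 affected, 3.0.5 fixed
    return False

print(openssl_affected("1.1.1p"))  # affected -> True
print(openssl_affected("1.1.1q"))  # fixed    -> False
print(openssl_affected("3.0.5"))   # fixed    -> False
```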
After a standard system update you need to reboot your computer to make all the necessary changes.

Clusters and applications are all visible and managed from a single console, with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/
Security fixes:
- CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
- CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
- CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
- CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
- CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
- CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
- CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
- CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
- CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
Bug fixes:
- assisted-service repo pin-latest.py script should allow custom tags to be pinned (BZ# 2065661)
- assisted-service-build image is too big in size (BZ# 2066059)
- assisted-service pin-latest.py script should exclude the postgres image (BZ# 2076901)
- PXE artifacts need to be served via HTTP (BZ# 2078531)
- Implementing new service-agent protocol on agent side (BZ# 2081281)
- RHACM 2.6.0 images (BZ# 2090906)
- Assisted service POD keeps crashing after a bare metal host is created (BZ# 2093503)
- Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled (BZ# 2096106)
- Fix assisted CI jobs that fail for cluster-info readiness (BZ# 2097696)
- Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB (BZ# 2099277)
- The pre-selected search keyword is not readable (BZ# 2107736)
- The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI (BZ# 2111843)
Bugs fixed (https://bugzilla.redhat.com/):
2065661 - assisted-service repo pin-latest.py script should allow custom tags to be pinned
2066059 - assisted-service-build image is too big in size
2076901 - assisted-service pin-latest.py script should exclude the postgres image
2078531 - iPXE artifacts need to be served via HTTP
2081281 - Implementing new service-agent protocol on agent side
2090901 - Capital letters in install-config.yaml .platform.baremetal.hosts[].name cause bootkube errors
2090906 - RHACM 2.6.0 images
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2093503 - Assisted service POD keeps crashing after a bare metal host is created
2096106 - Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled
2096445 - Assisted service POD keeps crashing after a bare metal host is created
2096460 - Spoke BMH stuck "inspecting" when deployed via the converged workflow
2097696 - Fix assisted CI jobs that fail for cluster-info readiness
2099277 - Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB
2103703 - Automatic version upgrade triggered for oadp operator installed by cluster-backup-chart
2104117 - Spoke BMH stuck "available" after changing a BIOS attribute via the converged workflow
2104984 - Infrastructure operator missing clusterrole permissions for interacting with mutatingwebhookconfigurations
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2105339 - Search Application button on the Application Table for Subscription applications does not Redirect
2105357 - [UI] hypershift cluster creation error - n[0] is undefined
2106347 - Submariner error looking up service account submariner-operator/submariner-addon-sa
2106882 - Security Context Restrictions are restricting creation of some pods which affects the deployment of some applications
2107049 - The clusterrole for global clusterset did not created by default
2107065 - governance-policy-framework in CrashLoopBackOff state on spoke cluster: Failed to start manager {"error": "error listening on :8081: listen tcp :8081: bind: address already in use"}
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107370 - Helm Release resource recreation feature does not work with the local cluster
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108888 - Hypershift on AWS - control plane not running
2109370 - The button to create the cluster is not visible
2111203 - Add ocp 4.11 to filters for discovering clusters in ACM 2.6
2111218 - Create cluster - Infrastructure page crashes
2111651 - "View application" button on app table for Flux applications redirects to apiVersion=ocp instead of flux
2111663 - Hosted cluster in Pending import state
2111671 - Leaked namespaces after deleting hypershift deployment
2111770 - [ACM 2.6] there is no node info for remote cluster in multiple hubs
2111843 - The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI
2112180 - The policy page is crashed after input keywords in the search box
2112281 - config-policy-controller pod can't startup in the OCP3.11 managed cluster
2112318 - Can't delete the objects which are re-created by policy when deleting the policy
2112321 - BMAC reconcile loop never stops after changes
2112426 - No cluster discovered due to x509: certificate signed by unknown authority
2112478 - Value of delayAfterRunSeconds is not shown on the final submit panel and the word itself should not be wrapped.
2112793 - Can't view details of the policy template when set the spec.pruneObjectBehavior as unsupported value
2112803 - ClusterServiceVersion for release 2.6 branch references "latest" tag
2113787 - [ACM 2.6] can not delete namespaces after detaching the hosted cluster
2113838 - the cluster proxy-agent was deployed on the non-infra nodes
2113842 - [ACM 2.6] must restart hosting cluster registration pod if update work-manager-addon cr to change installNamespace
2114982 - Control plane type shows 'Standalone' for hypershift cluster
2115622 - Hub fromsecret function doesn't work for hosted mode in multiple hub
2115723 - Can't view details of the policy template for customer and hypershift cluster in hosted mode from UI
2115993 - Policy automation details panel was not updated after editing the mode back to disabled
2116211 - Count of violations with unknown status was not accurate when managed clusters have mixed status
2116329 - cluster-proxy-agent not startup due to the imagepullbackoff on spoke cluster
2117113 - The proxy-server-host was not correct in cluster-proxy-agent
2117187 - pruneObjectBehavior radio selection cannot work well and always switch the first one template in multiple configurationPolicy templates
2117480 - [ACM 2.6] infra-id of HypershiftDeployment doesn't work
2118338 - Report the "namespace not found" error after clicked view yaml link of a policy in the multiple hub env
2119326 - Can't view details of the SecurityContextConstraints policy for managed clusters from UI
- After the clusters are managed, you can use the APIs that are provided by the engine to distribute configuration based on placement policy.

Description:
Openshift Logging Bug Fix Release (5.3.11)
Security Fix(es):
- golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Description:
Gatekeeper Operator v0.2
Gatekeeper is an open source project that applies the OPA Constraint Framework to enforce policies on your Kubernetes clusters. For support options for any other use, see the Gatekeeper open source project website at: https://open-policy-agent.github.io/gatekeeper/website/docs/howto/.
Security fix:
- CVE-2022-30629: gatekeeper-container: golang: crypto/tls: session tickets lack random ticket_age_add
- CVE-2022-1705: golang: net/http: improper sanitization of Transfer-Encoding header
- CVE-2022-1962: golang: go/parser: stack exhaustion in all Parse* functions
- CVE-2022-28131: golang: encoding/xml: stack exhaustion in Decoder.Skip
- CVE-2022-30630: golang: io/fs: stack exhaustion in Glob
- CVE-2022-30631: golang: compress/gzip: stack exhaustion in Reader.Read
- CVE-2022-30632: golang: path/filepath: stack exhaustion in Glob
- CVE-2022-30635: golang: encoding/gob: stack exhaustion in Decoder.Decode
- CVE-2022-30633: golang: encoding/xml: stack exhaustion in Unmarshal
- CVE-2022-32148: golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
Solution:
The requirements to apply the upgraded images are different whether or not you used the operator. Complete the following steps, depending on your installation:
- Upgrade the gatekeeper operator: The gatekeeper operator that is installed by the gatekeeper operator policy has installPlanApproval set to Automatic. This setting means the operator will be upgraded automatically when there is a new version of the operator. No further action is required for upgrade. If you changed the setting for installPlanApproval to manual, then you must view each cluster to manually approve the upgrade to the operator.
- Upgrade gatekeeper without the operator: The gatekeeper version is specified as part of the Gatekeeper CR in the gatekeeper operator policy. To upgrade the gatekeeper version: a) Determine the latest version of gatekeeper by visiting: https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. b) Click the tag dropdown, and find the latest static tag. An example tag is 'v3.3.0-1'. c) Edit the gatekeeper operator policy and update the image tag to use the latest static tag. For example, you might change this line to image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'.
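Step (c) above amounts to a one-line substitution of the image tag in the policy's YAML. A minimal shell sketch of that edit (both tag values are illustrative; take the real latest static tag from the catalog page referenced above):

```shell
# Sketch: bump the gatekeeper image line to a newer static tag.
# The old tag v3.3.0-0 and new tag v3.3.0-1 are illustrative only.
line="image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-0'"
updated=$(printf '%s\n' "$line" | sed "s/v3\.3\.0-0/v3.3.0-1/")
printf '%s\n' "$updated"
```

In practice the same substitution is done by editing the gatekeeper operator policy directly rather than piping through sed.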
Refer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/ for additional information. Bugs fixed (https://bugzilla.redhat.com/):
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
Additional details can be found in the upstream advisories at https://www.openssl.org/news/secadv/20220705.txt and https://www.openssl.org/news/secadv/20230207.txt
For the stable distribution (bullseye), these problems have been fixed in version 1.1.1n-0+deb11u4.
We recommend that you upgrade your openssl packages.
For the detailed security status of openssl please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openssl
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmPivONfFIAAAAAALgAo aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2 NDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND z0RBCA/+IqJ9qtjytulO41yPphASSEu22XVN9EYAUsdcpsTmnDtp1zUQSZpQv5qk 464Z2+0SkNtiHm5O5z5fs4LX0wXYBvLYrFnh2X2Z6rT+YFhXg8ZdEo+IysYSV7gB utbb1zbSqUSSLmlF/r6SnXy+HlTyB56p+k0MnLNHejes6DoghebZJGU6Dl5D8Z2J wOB6xi2sS3zVl1O+8//PPk5Sha8ESShuP/sBby01Xvpl65+8Icn7dXXHFNUn27rZ WdQCdxJaUJiqjZYzI5XAB+zHl8KNDiWP9MqIeT3g+YQ+nzSTeHxRPXDTDvClMv9y CJ90PaCY1DBNh5NrE2/IZkpIOKvTjRX3+db7Nab2GyRzLCP7p+1Bm14zHiKRHPOR t/6yX11diIF2zvlP/7qeCGkutv9KrFjSW81o1GgJMdt8uduHa95IgKNNUsA6Wf3O SkUP4EYfhXs2+TIfEenvqLuAmLsQBCRCvNDdmEGhtR4r0hpvcJ4eOaDBE6FWih1J i0mpDIjBYOV2iEUe85XfYflrcFfaxSwbl4ultH3Q3eWtiMwLgXqJ9dKRQEXJX7hp 48zKPwnftJbGBri9Y293sMjcpv3F/PTjXMh8LcUSVDkVVdQ8cLSmdmP4v4wSzV/q Z7KATUs6YAod4ts5u3/zD97Mzk0Xiecw/ggevbCfCvQTByk02Fg=lXE/ -----END PGP SIGNATURE-----
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202207-0107",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "sinec ins",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "openssl",
"scope": "lt",
"trust": 1.0,
"vendor": "openssl",
"version": "1.1.1q"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "openssl",
"scope": "lt",
"trust": 1.0,
"vendor": "openssl",
"version": "3.0.5"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "openssl",
"scope": "gte",
"trust": 1.0,
"vendor": "openssl",
"version": "3.0.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"model": "clustered data ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "sinec ins",
"scope": "eq",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"model": "openssl",
"scope": "gte",
"trust": 1.0,
"vendor": "openssl",
"version": "1.1.1"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.1.1q",
"versionStartIncluding": "1.1.1",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0.5",
"versionStartIncluding": "3.0.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168150"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168280"
}
],
"trust": 0.7
},
"cve": "CVE-2022-2097",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULMON",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "CVE-2022-2097",
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "MEDIUM",
"trust": 0.1,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 5.3,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 3.9,
"impactScore": 1.4,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-2097",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2022-2097",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2097"
},
{
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "AES OCB mode for 32-bit x86 platforms using the AES-NI assembly optimised implementation will not encrypt the entirety of the data under some circumstances. This could reveal sixteen bytes of data that was preexisting in the memory that wasn\u0027t written. In the special case of \"in place\" encryption, sixteen bytes of the plaintext would be revealed. Since OpenSSL does not support OCB based cipher suites for TLS and DTLS, they are both unaffected. Fixed in OpenSSL 3.0.5 (Affected 3.0.0-3.0.4). Fixed in OpenSSL 1.1.1q (Affected 1.1.1-1.1.1p). The issue in CVE-2022-1292 did not find other places in the `c_rehash` script where it possibly passed the file names of certificates being hashed to a command executed through the shell. Some operating systems distribute this script in a manner where it is automatically executed. On these operating systems, this flaw allows an malicious user to execute arbitrary commands with the privileges of the script. (CVE-2022-2097). - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-02\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: OpenSSL: Multiple Vulnerabilities\n Date: October 16, 2022\n Bugs: #741570, #809980, #832339, #835343, #842489, #856592\n ID: 202210-02\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in OpenSSL, the worst of\nwhich could result in denial of service. \n\nBackground\n==========\n\nOpenSSL is an Open Source toolkit implementing the Secure Sockets Layer\n(SSL v2/v3) and Transport Layer Security (TLS v1) as well as a general\npurpose cryptography library. 
\n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 dev-libs/openssl \u003c 1.1.1q \u003e= 1.1.1q\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in OpenSSL. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll OpenSSL users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=dev-libs/openssl-1.1.1q\"\n\nReferences\n==========\n\n[ 1 ] CVE-2020-1968\n https://nvd.nist.gov/vuln/detail/CVE-2020-1968\n[ 2 ] CVE-2021-3711\n https://nvd.nist.gov/vuln/detail/CVE-2021-3711\n[ 3 ] CVE-2021-3712\n https://nvd.nist.gov/vuln/detail/CVE-2021-3712\n[ 4 ] CVE-2021-4160\n https://nvd.nist.gov/vuln/detail/CVE-2021-4160\n[ 5 ] CVE-2022-0778\n https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 6 ] CVE-2022-1292\n https://nvd.nist.gov/vuln/detail/CVE-2022-1292\n[ 7 ] CVE-2022-1473\n https://nvd.nist.gov/vuln/detail/CVE-2022-1473\n[ 8 ] CVE-2022-2097\n https://nvd.nist.gov/vuln/detail/CVE-2022-2097\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-02\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). 
\n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update\nAdvisory ID: RHSA-2022:6156-01\nProduct: RHODF\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6156\nIssue date: 2022-08-24\nCVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528\n CVE-2022-0235 CVE-2022-0536 CVE-2022-0670\n CVE-2022-1292 CVE-2022-1586 CVE-2022-1650\n CVE-2022-1785 CVE-2022-1897 CVE-2022-1927\n CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n CVE-2022-23806 CVE-2022-24675 CVE-2022-24771\n CVE-2022-24772 CVE-2022-24773 CVE-2022-24785\n CVE-2022-24921 CVE-2022-25313 CVE-2022-25314\n CVE-2022-27774 CVE-2022-27776 CVE-2022-27782\n CVE-2022-28327 CVE-2022-29526 CVE-2022-29810\n CVE-2022-29824 CVE-2022-31129\n====================================================================\n1. Summary:\n\nUpdated images that include numerous enhancements, security, and bug fixes\nare now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat\nEnterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Data Foundation is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. Red Hat\nOpenShift Data Foundation is a highly scalable, production-grade persistent\nstorage for stateful applications running in the Red Hat OpenShift\nContainer Platform. 
In addition to persistent storage, Red Hat OpenShift\nData Foundation provisions a multicloud data management service with an S3\ncompatible API. \n\nSecurity Fix(es):\n\n* eventsource: Exposure of Sensitive Information (CVE-2022-1650)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n(CVE-2021-23440)\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n\n* node-forge: Signature verification leniency in checking `digestAlgorithm`\nstructure can lead to signature forgery (CVE-2022-24771)\n\n* node-forge: Signature verification failing to check tailing garbage bytes\ncan lead to signature forgery (CVE-2022-24772)\n\n* node-forge: Signature verification leniency in checking `DigestInfo`\nstructure (CVE-2022-24773)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nFor more details about the security 
issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\nThese updated images include numerous enhancements and bug fixes. Space\nprecludes documenting all of these changes in this advisory. Users are\ndirected to the Red Hat OpenShift Data Foundation Release Notes for\ninformation on the most significant of these changes:\n\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\nAll Red Hat OpenShift Data Foundation users are advised to upgrade to these\nupdated images, which provide numerous bug fixes and enhancements. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. For details on how to apply this\nupdate, refer to: https://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1937117 - Deletion of StorageCluster doesn\u0027t remove ceph toolbox pod\n1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified\n1973317 - libceph: read_partial_message and bad crc/signature errors\n1996829 - Permissions assigned to ceph auth principals when using external storage are too broad\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2027724 - Warning log for rook-ceph-toolbox in ocs-operator log\n2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2050897 - CVE-2022-0235 
mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056697 - odf-csi-addons-operator subscription failed while using custom catalog source\n2058211 - Add validation for CIDR field in DRPolicy\n2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced\n2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10\n2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE\n2063691 - [GSS] [RFE] Add termination policy to s3 route\n2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2066514 - OCS operator to install Ceph prometheus alerts instead of Rook\n2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking `DigestInfo` structure\n2069314 - OCS external mode should allow specifying names for all Ceph auth principals\n2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster. 
\n2069812 - must-gather: rbd_vol_and_snap_info collection is broken\n2069815 - must-gather: essential rbd mirror command outputs aren\u0027t collected\n2070542 - After creating a new storage system it redirects to 404 error page instead of the \"StorageSystems\" page for OCP 4.11\n2071494 - [DR] Applications are not getting deployed\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty\n2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled\n2075426 - 4.10 must gather is not available after GA of 4.10\n2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in \"Progressing\" state although all the openshift-storage pods are up and Running\n2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost\n2077242 - vg-manager missing permissions\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2079866 - [DR] odf-multicluster-console is in CLBO state\n2079873 - csi-nfsplugin pods are not coming up after successful patch request to update \"ROOK_CSI_ENABLE_NFS\": \"true\"\u0027\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2081680 - Add the LVM Operator into the Storage category in OperatorHub\n2082028 - UI does not have the option to configure capacity, security and networks,etc. 
during storagesystem creation\n2082078 - OBC\u0027s not getting created on primary cluster when manageds3 set as \"true\" for mirrorPeer\n2082497 - Do not filter out removable devices\n2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)\n2083441 - LVM operator should deploy the volumesnapshotclass resource\n2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status\n2083993 - Add missing pieces for storageclassclaim\n2084041 - [Console Migration] Link-able storage system name directs to blank page\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided\"\n2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates\n2084546 - [Console Migration] Provider details absent under backing store in UI\n2084565 - [Console Migration] The creation of new backing store , directs to a blank page\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred\n2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace\n2086557 - Thin pool in lvm operator doesn\u0027t use all disks\n2086675 - [UI]No option to \"add capacity\" via the Installed Operators tab\n2086982 - ODF 4.11 deployment is failing\n2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm\n2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2087107 - Set default storage class if none is set\n2087237 - [UI] After clicking on Create StorageSystem, 
it navigates to Storage Systems tab but shows an error message\n2087675 - ocs-metrics-exporter pod crashes on odf v4.11\n2087732 - [Console Migration] Events page missing under new namespace store\n2087755 - [Console Migration] Bucket Class details page doesn\u0027t have the complete details in UI\n2088359 - Send VG Metrics even if storage is being consumed from thinPool alone\n2088380 - KMS using vault on standalone MCG cluster is not enabled\n2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint\n2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook\n2089296 - [MS v2] Storage cluster in error phase and \u0027ocs-provider-qe\u0027 addon installation failed with ODF 4.10.2\n2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts\n2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9. \n2089552 - [MS v2] Cannot create StorageClassClaim\n2089567 - [Console Migration] Improve the styling of Various Components\n2089786 - [Console Migration] \"Attach to deployment\" option is missing in kebab menu for Object Bucket Claims . \n2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket. \n2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed\n2090278 - [LVMO] Some containers are missing resource requirements and limits\n2090314 - [LVMO] CSV is missing some useful annotations\n2090953 - [MCO] DRCluster created under default namespace\n2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics\n2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool. 
\n2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference\n2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external\n2091894 - All backingstores in cluster spontaneously change their own secret\n2091951 - [GSS] OCS pods are restarting due to liveness probe failure\n2091998 - Volume Snapshots not work with external restricted mode\n2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool\n2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks\n2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)\n2092349 - Enable zeroing on the thin-pool during creation\n2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase\n2092400 - [MS v2] StorageClassClaim creation is failing with error \"no StorageCluster found\"\n2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically\n2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2094179 - MCO fails to create DRClusters when replication mode is synchronous\n2094853 - [Console Migration] Description under storage class drop down in add capacity is missing . 
\n2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2095155 - Use tool `black` to format the python external script\n2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster\n2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time\n2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page\n2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened\n2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False\n2096937 - Storage - Data Foundation: i18n misses\n2097216 - Collect StorageClassClaim details in must-gather\n2097287 - [UI] Dropdown doesn\u0027t close on it\u0027s own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2097305 - Add translations for ODF 4.11\n2098121 - Managed ODF not getting detected\n2098261 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled\n2099581 - StorageClassClaim with encryption gets into Failed state\n2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project\n2099646 - Block pool list page kebab action menu is showing empty options\n2099660 - OCS dashbaords not appearing unless user clicks on \"Overview\" Tab\n2099724 - S3 secret namespace on the managed cluster doesn\u0027t match with the namespace in the s3profile\n2099965 - rbd: provide option to disable setting metadata on RBD images\n2100326 - [ODF to ODF] Volume snapshot creation failed\n2100352 - Make lvmo pod labels more uniform\n2100946 - Avoid temporary ceph health alert for new 
clusters where the insecure global id is allowed longer than necessary\n2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install\n2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection\n2103818 - Restored snapshot don\u0027t have any content\n2104833 - Need to update configmap for IBM storage odf operator GA\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23440\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0536\nhttps://access.redhat.com/security/cve/CVE-2022-0670\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1650\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24771\nhttps://access.redhat.com/security/cve/CVE-2022-24772\nhttps://access.redhat.com/security/cve/CVE-2022-24773\nhttps://access.redhat.com/security/cve/CVE-2022-24785\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve
/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-29526\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-31129\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYwZpHdzjgjWX9erEAQgy1Q//QaStGj34eQ0ap5J5gCcC1lTv7U908fNy\nXo7VvwAi67IslacAiQhWNyhg+jr1c46Op7kAAC04f8n25IsM+7xYYyieJ0YDAP7N\nb3iySRKnPI6I9aJlN0KMm7J1jfjFmcuPMrUdDHiSGNsmK9zLmsQs3dGMaCqYX+fY\nsJEDPnMMulbkrPLTwSG2IEcpqGH2BoEYwPhSblt2fH0Pv6H7BWYF/+QjxkGOkGDj\ngz0BBnc1Foir2BpYKv6/+3FUbcXFdBXmrA5BIcZ9157Yw3RP/khf+lQ6I1KYX1Am\n2LI6/6qL8HyVWyl+DEUz0DxoAQaF5x61C35uENyh/U96sYeKXtP9rvDC41TvThhf\nmX4woWcUN1euDfgEF22aP9/gy+OsSyfP+SV0d9JKIaM9QzCCOwyKcIM2+CeL4LZl\nCSAYI7M+cKsl1wYrioNBDdG8H54GcGV8kS1Hihb+Za59J7pf/4IPuHy3Cd6FBymE\nhTFLE9YGYeVtCufwdTw+4CEjB2jr3WtzlYcSc26SET9aPCoTUmS07BaIAoRmzcKY\n3KKSKi3LvW69768OLQt8UT60WfQ7zHa+OWuEp1tVoXe/XU3je42yuptCd34axn7E\n2gtZJOocJxL2FtehhxNTx7VI3Bjy2V0VGlqqf1t6/z6r0IOhqxLbKeBvH9/XF/6V\nERCapzwcRuQ=gV+z\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity fix:\n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\nBug fixes:\n\n* Remove 1.9.1 from Proxy Patch Documentation (BZ# 2076856)\n\n* RHACM 2.3.12 images (BZ# 2101411)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation\n2101411 - RHACM 2.3.12 images\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. ==========================================================================\nUbuntu Security Notice USN-5502-1\nJuly 05, 2022\n\nopenssl vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nOpenSSL could be made to expose sensitive information over the network. A remote attacker could possibly use this issue to obtain\nsensitive information. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n libssl3 3.0.2-0ubuntu1.6\n\nUbuntu 21.10:\n libssl1.1 1.1.1l-1ubuntu1.6\n\nUbuntu 20.04 LTS:\n libssl1.1 1.1.1f-1ubuntu2.16\n\nUbuntu 18.04 LTS:\n libssl1.1 1.1.1-1ubuntu2.1~18.04.20\n\nAfter a standard system update you need to reboot your computer to make all\nthe necessary changes. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/\n\nSecurity fixes: \n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n* CVE-2022-30629 golang: crypto/tls: session tickets lack random\nticket_age_add\n\n* CVE-2022-1705 golang: net/http: improper sanitization of\nTransfer-Encoding header\n\n* CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n\n* CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n\n* CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n\n* CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n* CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n\n* CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n* CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n\n* CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy -\nomit X-Forwarded-For not working\n\nBug fixes:\n\n* assisted-service repo pin-latest.py script should allow custom tags to be\npinned (BZ# 2065661)\n\n* assisted-service-build image is too big in size (BZ# 2066059)\n\n* assisted-service pin-latest.py script should exclude the postgres image\n(BZ# 2076901)\n\n* PXE artifacts need to be served via HTTP (BZ# 2078531)\n\n* Implementing new service-agent protocol on agent side (BZ# 2081281)\n\n* RHACM 2.6.0 images (BZ# 2090906)\n\n* Assisted service POD keeps crashing after a bare metal host is created\n(BZ# 2093503)\n\n* Assisted service triggers the worker nodes re-provisioning on the hub\ncluster when the converged flow is enabled (BZ# 2096106)\n\n* Fix assisted CI jobs that fail for cluster-info readiness (BZ# 2097696)\n\n* Nodes are required to have installation disks of at least 120GB instead\nof at minimum of 100GB (BZ# 
2099277)\n\n* The pre-selected search keyword is not readable (BZ# 2107736)\n\n* The value of label expressions in the new placement for policy and\npolicysets cannot be shown real-time from UI (BZ# 2111843)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2065661 - assisted-service repo pin-latest.py script should allow custom tags to be pinned\n2066059 - assisted-service-build image is too big in size\n2076901 - assisted-service pin-latest.py script should exclude the postgres image\n2078531 - iPXE artifacts need to be served via HTTP\n2081281 - Implementing new service-agent protocol on agent side\n2090901 - Capital letters in install-config.yaml .platform.baremetal.hosts[].name cause bootkube errors\n2090906 - RHACM 2.6.0 images\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2093503 - Assisted service POD keeps crashing after a bare metal host is created\n2096106 - Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled\n2096445 - Assisted service POD keeps crashing after a bare metal host is created\n2096460 - Spoke BMH stuck \"inspecting\" when deployed via the converged workflow\n2097696 - Fix assisted CI jobs that fail for cluster-info readiness\n2099277 - Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB\n2103703 - Automatic version upgrade triggered for oadp operator installed by cluster-backup-chart\n2104117 - Spoke BMH stuck \"available\" 
after changing a BIOS attribute via the converged workflow\n2104984 - Infrastructure operator missing clusterrole permissions for interacting with mutatingwebhookconfigurations\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2105339 - Search Application button on the Application Table for Subscription applications does not Redirect\n2105357 - [UI] hypershift cluster creation error - n[0] is undefined\n2106347 - Submariner error looking up service account submariner-operator/submariner-addon-sa\n2106882 - Security Context Restrictions are restricting creation of some pods which affects the deployment of some applications\n2107049 - The clusterrole for global clusterset did not created by default\n2107065 - governance-policy-framework in CrashLoopBackOff state on spoke cluster: Failed to start manager {\"error\": \"error listening on :8081: listen tcp :8081: bind: address already in use\"}\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107370 - Helm Release resource recreation feature does not work with the local cluster\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108888 - Hypershift on AWS - control plane not running\n2109370 - The button to create the cluster is not visible\n2111203 - Add ocp 4.11 to filters for discovering clusters in ACM 2.6\n2111218 - Create cluster - 
Infrastructure page crashes\n2111651 - \"View application\" button on app table for Flux applications redirects to apiVersion=ocp instead of flux\n2111663 - Hosted cluster in Pending import state\n2111671 - Leaked namespaces after deleting hypershift deployment\n2111770 - [ACM 2.6] there is no node info for remote cluster in multiple hubs\n2111843 - The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI\n2112180 - The policy page is crashed after input keywords in the search box\n2112281 - config-policy-controller pod can\u0027t startup in the OCP3.11 managed cluster\n2112318 - Can\u0027t delete the objects which are re-created by policy when deleting the policy\n2112321 - BMAC reconcile loop never stops after changes\n2112426 - No cluster discovered due to x509: certificate signed by unknown authority\n2112478 - Value of delayAfterRunSeconds is not shown on the final submit panel and the word itself should not be wrapped. \n2112793 - Can\u0027t view details of the policy template when set the spec.pruneObjectBehavior as unsupported value\n2112803 - ClusterServiceVersion for release 2.6 branch references \"latest\" tag\n2113787 - [ACM 2.6] can not delete namespaces after detaching the hosted cluster\n2113838 - the cluster proxy-agent was deployed on the non-infra nodes\n2113842 - [ACM 2.6] must restart hosting cluster registration pod if update work-manager-addon cr to change installNamespace\n2114982 - Control plane type shows \u0027Standalone\u0027 for hypershift cluster\n2115622 - Hub fromsecret function doesn\u0027t work for hosted mode in multiple hub\n2115723 - Can\u0027t view details of the policy template for customer and hypershift cluster in hosted mode from UI\n2115993 - Policy automation details panel was not updated after editing the mode back to disabled\n2116211 - Count of violations with unknown status was not accurate when managed clusters have mixed status\n2116329 - cluster-proxy-agent not 
startup due to the imagepullbackoff on spoke cluster\n2117113 - The proxy-server-host was not correct in cluster-proxy-agent\n2117187 - pruneObjectBehavior radio selection cannot work well and always switch the first one template in multiple configurationPolicy templates\n2117480 - [ACM 2.6] infra-id of HypershiftDeployment doesn\u0027t work\n2118338 - Report the \"namespace not found\" error after clicked view yaml link of a policy in the multiple hub env\n2119326 - Can\u0027t view details of the SecurityContextConstraints policy for managed clusters from UI\n\n5. After the clusters are managed, you can use the APIs that\nare provided by the engine to distribute configuration based on placement\npolicy. Description:\n\nOpenshift Logging Bug Fix Release (5.3.11)\n\nSecurity Fix(es):\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Description:\n\nGatekeeper Operator v0.2\n\nGatekeeper is an open source project that applies the OPA Constraint\nFramework to enforce policies on your Kubernetes clusters. For support options for any other use, see the Gatekeeper\nopen source project website at:\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/. 
\n\nSecurity fix:\n\n* CVE-2022-30629: gatekeeper-container: golang: crypto/tls: session tickets\nlack random ticket_age_add\n\n* CVE-2022-1705: golang: net/http: improper sanitization of\nTransfer-Encoding header\n\n* CVE-2022-1962: golang: go/parser: stack exhaustion in all Parse*\nfunctions\n\n* CVE-2022-28131: golang: encoding/xml: stack exhaustion in Decoder.Skip\n\n* CVE-2022-30630: golang: io/fs: stack exhaustion in Glob\n\n* CVE-2022-30631: golang: compress/gzip: stack exhaustion in Reader.Read\n\n* CVE-2022-30632: golang: path/filepath: stack exhaustion in Glob\n\n* CVE-2022-30635: golang: encoding/gob: stack exhaustion in Decoder.Decode\n\n* CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n* CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy -\nomit X-Forwarded-For not working\n\n3. Solution:\n\nThe requirements to apply the upgraded images are different whether or not\nyou\nused the operator. Complete the following steps, depending on your\ninstallation:\n\n* Upgrade gatekeeper operator:\nThe gatekeeper operator that is installed by the gatekeeper operator policy\nhas\n`installPlanApproval` set to `Automatic`. This setting means the operator\nwill\nbe upgraded automatically when there is a new version of the operator. No\nfurther action is required for upgrade. If you changed the setting for\n`installPlanApproval` to `manual`, then you must view each cluster to\nmanually\napprove the upgrade to the operator. \n\n* Upgrade gatekeeper without the operator:\nThe gatekeeper version is specified as part of the Gatekeeper CR in the\ngatekeeper operator policy. To upgrade the gatekeeper version:\na) Determine the latest version of gatekeeper by visiting:\nhttps://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. \nb) Click the tag dropdown, and find the latest static tag. An example tag\nis\n\u0027v3.3.0-1\u0027. 
\nc) Edit the gatekeeper operator policy and update the image tag to use the\nlatest static tag. For example, you might change this line to image:\n\u0027registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1\u0027. \n\nRefer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/\nfor additional information. Bugs fixed (https://bugzilla.redhat.com/):\n\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n5. \n\nAdditional details can be found in the upstream advisories at\nhttps://www.openssl.org/news/secadv/20220705.txt and\nhttps://www.openssl.org/news/secadv/20230207.txt\n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 1.1.1n-0+deb11u4. \n\nWe recommend that you upgrade your openssl packages. 
\n\nFor the detailed security status of openssl please refer to its security\ntracker page at:\nhttps://security-tracker.debian.org/tracker/openssl\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmPivONfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0RBCA/+IqJ9qtjytulO41yPphASSEu22XVN9EYAUsdcpsTmnDtp1zUQSZpQv5qk\n464Z2+0SkNtiHm5O5z5fs4LX0wXYBvLYrFnh2X2Z6rT+YFhXg8ZdEo+IysYSV7gB\nutbb1zbSqUSSLmlF/r6SnXy+HlTyB56p+k0MnLNHejes6DoghebZJGU6Dl5D8Z2J\nwOB6xi2sS3zVl1O+8//PPk5Sha8ESShuP/sBby01Xvpl65+8Icn7dXXHFNUn27rZ\nWdQCdxJaUJiqjZYzI5XAB+zHl8KNDiWP9MqIeT3g+YQ+nzSTeHxRPXDTDvClMv9y\nCJ90PaCY1DBNh5NrE2/IZkpIOKvTjRX3+db7Nab2GyRzLCP7p+1Bm14zHiKRHPOR\nt/6yX11diIF2zvlP/7qeCGkutv9KrFjSW81o1GgJMdt8uduHa95IgKNNUsA6Wf3O\nSkUP4EYfhXs2+TIfEenvqLuAmLsQBCRCvNDdmEGhtR4r0hpvcJ4eOaDBE6FWih1J\ni0mpDIjBYOV2iEUe85XfYflrcFfaxSwbl4ultH3Q3eWtiMwLgXqJ9dKRQEXJX7hp\n48zKPwnftJbGBri9Y293sMjcpv3F/PTjXMh8LcUSVDkVVdQ8cLSmdmP4v4wSzV/q\nZ7KATUs6YAod4ts5u3/zD97Mzk0Xiecw/ggevbCfCvQTByk02Fg=lXE/\n-----END PGP SIGNATURE-----\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2097"
},
{
"db": "VULMON",
"id": "CVE-2022-2097"
},
{
"db": "PACKETSTORM",
"id": "168714"
},
{
"db": "PACKETSTORM",
"id": "168150"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "167708"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "170896"
}
],
"trust": 1.89
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-2097",
"trust": 2.1
},
{
"db": "SIEMENS",
"id": "SSA-332410",
"trust": 1.1
},
{
"db": "ICS CERT",
"id": "ICSA-23-017-03",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-2097",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168714",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168150",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168213",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168378",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167708",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168287",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168347",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168289",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168280",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170896",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2097"
},
{
"db": "PACKETSTORM",
"id": "168714"
},
{
"db": "PACKETSTORM",
"id": "168150"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "167708"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "170896"
},
{
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"id": "VAR-202207-0107",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.20766129
},
"last_update_date": "2024-07-23T19:53:59.023000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Amazon Linux 2: ALAS2-2023-1974",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2023-1974"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-2097"
},
{
"title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0"
},
{
"title": "Red Hat: Moderate: openssl security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225818 - security advisory"
},
{
"title": "Red Hat: Moderate: openssl security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226224 - security advisory"
},
{
"title": "Debian Security Advisories: DSA-5343-1 openssl -- security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=b6a11b827fe9cfaea9c113b2ad37856f"
},
{
"title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226517 - security advisory"
},
{
"title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226184 - security advisory"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-147",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-147"
},
{
"title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226422 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226103 - security advisory"
},
{
"title": "Brocade Security Advisories: Access Denied",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=38e06d13217149784c0941a3098b8989"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-195",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-195"
},
{
"title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226188 - security advisory"
},
{
"title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226182 - security advisory"
},
{
"title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226051 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226283 - security advisory"
},
{
"title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226183 - security advisory"
},
{
"title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226507 - security advisory"
},
{
"title": "Red Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227055 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227058 - security advisory"
},
{
"title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226024 - security advisory"
},
{
"title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226714 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226290 - security advisory"
},
{
"title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226348 - security advisory"
},
{
"title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226345 - security advisory"
},
{
"title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226346 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226430 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226370 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226271 - security advisory"
},
{
"title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226696 - security advisory"
},
{
"title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Command Suite, Hitachi Automation Director, Hitachi Configuration Manager, Hitachi Infrastructure Analytics Advisor and Hitachi Ops Center",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2023-126"
},
{
"title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226156 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228750 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226526 - security advisory"
},
{
"title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230408 - security advisory"
},
{
"title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228889 - security advisory"
},
{
"title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228781 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory"
},
{
"title": "https://github.com/jntass/TASSL-1.1.1",
"trust": 0.1,
"url": "https://github.com/jntass/tassl-1.1.1 "
},
{
"title": "BIF - The Fairwinds Base Image Finder Client",
"trust": 0.1,
"url": "https://github.com/fairwindsops/bif "
},
{
"title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories",
"trust": 0.1,
"url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories "
},
{
"title": "GitHub Actions CI App Pipeline",
"trust": 0.1,
"url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc "
},
{
"title": "https://github.com/cdupuis/image-api",
"trust": 0.1,
"url": "https://github.com/cdupuis/image-api "
},
{
"title": "OpenSSL-CVE-lib",
"trust": 0.1,
"url": "https://github.com/chnzzh/openssl-cve-lib "
},
{
"title": "PoC in GitHub",
"trust": 0.1,
"url": "https://github.com/nomi-sec/poc-in-github "
},
{
"title": "PoC in GitHub",
"trust": 0.1,
"url": "https://github.com/manas3c/cve-poc "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2097"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-327",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.2,
"url": "https://www.openssl.org/news/secadv/20220705.txt"
},
{
"trust": 1.2,
"url": "https://security.gentoo.org/glsa/202210-02"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220715-0011/"
},
{
"trust": 1.1,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2023/dsa-5343"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2023/02/msg00019.html"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20230420-0008/"
},
{
"trust": 1.1,
"url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=a98f339ddd7e8f487d6e0088d4a9a42324885a93"
},
{
"trust": 1.1,
"url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=919925673d6c9cfed3c1085497f5dfbbed5fc431"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/v6567jerrhhjw2gngjgkdrnhr7snpzk7/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/r6ck57nbqftpumxapjurcgxuyt76nqak/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/"
},
{
"trust": 1.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
},
{
"trust": 1.0,
"url": "https://security.netapp.com/advisory/ntap-20240621-0006/"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-2526"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#critical"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-36067"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/327.html"
},
{
"trust": 0.1,
"url": "https://alas.aws.amazon.com/al2/alas-2023-1974.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://github.com/fairwindsops/bif"
},
{
"trust": 0.1,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-23-017-03"
},
{
"trust": 0.1,
"url": "https://alas.aws.amazon.com/al2022/alas-2022-195.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1968"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1473"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4160"
},
{
"trust": 0.1,
"url": "https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28327"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24921"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21698"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29810"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23440"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23440"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6156"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1729"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21166"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21125"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1966"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1966"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6507"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1-1ubuntu2.1~18.04.20"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1f-1ubuntu2.16"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openssl/3.0.2-0ubuntu1.6"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1l-1ubuntu1.6"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5502-1"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6422"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/multicluster_engine/index#installing-while-connected-online"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-36067"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6182"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6348"
},
{
"trust": 0.1,
"url": "https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30632"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30630"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-4450"
},
{
"trust": 0.1,
"url": "https://www.openssl.org/news/secadv/20230207.txt"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-0215"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/openssl"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-0286"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-4304"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2097"
},
{
"db": "PACKETSTORM",
"id": "168714"
},
{
"db": "PACKETSTORM",
"id": "168150"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "167708"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "170896"
},
{
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULMON",
"id": "CVE-2022-2097"
},
{
"db": "PACKETSTORM",
"id": "168714"
},
{
"db": "PACKETSTORM",
"id": "168150"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "167708"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168280"
},
{
"db": "PACKETSTORM",
"id": "170896"
},
{
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-07-05T00:00:00",
"db": "VULMON",
"id": "CVE-2022-2097"
},
{
"date": "2022-10-17T13:44:06",
"db": "PACKETSTORM",
"id": "168714"
},
{
"date": "2022-08-25T15:22:18",
"db": "PACKETSTORM",
"id": "168150"
},
{
"date": "2022-09-01T16:30:25",
"db": "PACKETSTORM",
"id": "168213"
},
{
"date": "2022-09-14T15:08:07",
"db": "PACKETSTORM",
"id": "168378"
},
{
"date": "2022-07-06T15:29:36",
"db": "PACKETSTORM",
"id": "167708"
},
{
"date": "2022-09-07T17:07:14",
"db": "PACKETSTORM",
"id": "168287"
},
{
"date": "2022-09-13T15:29:12",
"db": "PACKETSTORM",
"id": "168347"
},
{
"date": "2022-09-07T17:09:04",
"db": "PACKETSTORM",
"id": "168289"
},
{
"date": "2022-09-07T16:53:57",
"db": "PACKETSTORM",
"id": "168280"
},
{
"date": "2023-02-08T15:58:04",
"db": "PACKETSTORM",
"id": "170896"
},
{
"date": "2022-07-05T11:15:08.340000",
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2022-2097"
},
{
"date": "2024-06-21T19:15:23.083000",
"db": "NVD",
"id": "CVE-2022-2097"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "167708"
}
],
"trust": 0.1
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Gentoo Linux Security Advisory 202210-02",
"sources": [
{
"db": "PACKETSTORM",
"id": "168714"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "info disclosure",
"sources": [
{
"db": "PACKETSTORM",
"id": "170896"
}
],
"trust": 0.1
}
}
VAR-202105-0904
Vulnerability from variot - Updated: 2024-07-23 19:50 A flaw was found in the Linux kernel in versions before 5.12. The value of internal.ndata, in the KVM API, is mapped to an array index, which can be updated by a user process at any time and could therefore lead to an out-of-bounds write. The highest threat from this vulnerability is to data integrity and system availability. The Linux kernel is vulnerable to an out-of-bounds write; information may be tampered with and a denial of service (DoS) condition may be caused. KVM is the kernel-based virtual machine subsystem of the Linux kernel. This vulnerability could result in an out-of-bounds write. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.2.4 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/
Security fixes:
-
redisgraph-tls: redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
-
console-header-container: nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
-
console-container: nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
Bug fixes:
-
RHACM 2.2.4 images (BZ# 1957254)
-
Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7 (BZ#1950832)
-
ACM Operator should support using the default route TLS (BZ# 1955270)
-
The scrolling bar for search filter does not work properly (BZ# 1956852)
-
Limits on Length of MultiClusterObservability Resource Name (BZ# 1959426)
-
The proxy setup in install-config.yaml is not worked when IPI installing with RHACM (BZ# 1960181)
-
Unable to make SSH connection to a Bitbucket server (BZ# 1966513)
-
Observability Thanos store shard crashing - cannot unmarshall DNS message (BZ# 1967890)
-
Bugs fixed (https://bugzilla.redhat.com/):
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1950832 - Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7
1952150 - [DDF] It would be great to see all the options available for the bucket configuration and which attributes are mandatory
1954506 - [DDF] Table does not contain data about 20 clusters. Now it's difficult to estimate CPU usage with larger clusters
1954535 - Reinstall Submariner - No endpoints found on one cluster
1955270 - ACM Operator should support using the default route TLS
1956852 - The scrolling bar for search filter does not work properly
1957254 - RHACM 2.2.4 images
1959426 - Limits on Length of MultiClusterObservability Resource Name
1960181 - The proxy setup in install-config.yaml is not worked when IPI installing with RHACM.
1963128 - [DDF] Please rename this to "Amazon Elastic Kubernetes Service"
1966513 - Unable to make SSH connection to a Bitbucket server
1967357 - [DDF] When I clicked on this yaml, I get a HTTP 404 error.
1967890 - Observability Thanos store shard crashing - cannot unmarshal DNS message
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.16. See the following advisories for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2287
Space precludes documenting all of the container images in this advisory.
Additional Changes:
This update also fixes several bugs. Documentation for these changes is available from the Release Notes document linked to in the References section. Solution:
For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1889659 - [Assisted-4.6] [cluster validation] Number of hosts validation is not enforced when Automatic role assigned
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1932638 - Removing ssh keys MC does not remove the key from authorized_keys
1934180 - vsphere-problem-detector should check if datastore is part of datastore cluster
1937396 - when kuryr quotas are unlimited, we should not sent alerts
1939014 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1939553 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string
1940275 - [IPI Baremetal] Revert Sending full ignition to masters
1942603 - [4.7z] Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork
1944046 - Warn users when using an unsupported browser such as IE
1944575 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1945702 - Operator dependency not consistently chosen from default channel
1946682 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1947091 - Incorrect skipped status for conditional tasks in the pipeline run
1947427 - Bootstrap ignition shim doesn't follow proxy settings
1948398 - [oVirt] remove ovirt_cafile from ovirt-credentials secret
1949541 - Kuryr-Controller crashes when it's missing the status object
1950290 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1951210 - Pod log filename no longer in -.log format
1953475 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1954121 - [ceo] [release-4.7] Operator goes degraded when a second internal node ip is added after install
1955210 - OCP 4.6 Build fails when filename contains an umlaut
1955418 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955482 - [4.7] Drop high-cardinality metrics from kube-state-metrics which aren't used
1955600 - e2e unidling test flakes in CI
1956565 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry
1956980 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed.
1957308 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio
1957499 - OperatorHub - console accepts any value for "Infrastructure features" annotation
1958416 - openshift-oauth-apiserver apiserver pod crashloopbackoffs
1958467 - [4.7] Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct
1958873 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1959546 - [4.7] storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions
1959737 - Unable to assign nodes for EgressIP even if the egress-assignable label is set
1960093 - Console not works well against a proxy in front of openshift clusters
1960111 - Port 8080 of oVirt CSI driver is causing collisions with other services
1960542 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960544 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console
1960562 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960589 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop
1960645 - [Backport 4.7] Add virt_platform metric to the collected metrics
1960686 - GlobalConfigPage is constantly requesting resources
1961069 - CMO end-to-end tests work only on AWS
1961367 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1961518 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1961557 - [release-4.7] respect the shutdown-delay-duration from OpenShiftAPIServerConfig
1961719 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop
1961887 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1962314 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true
1962493 - Kebab menu of taskrun contains Edit options which should not be present
1962637 - Nodes tainted after configuring additional host iface
1962819 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1962949 - e2e-metal-ipi and related jobs fail to bootstrap due to multipe VIP's
1963141 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1963243 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names"
1964322 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1964568 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation
1965075 - [4.7z] After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits
1965932 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1966358 - Build failure on s390x
1966798 - [tests] Release 4.7 broken due to the usage of wrong OCS version
1966810 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration
1967328 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1967966 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict
1967972 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews
1970322 - [OVN]EgressFirewall doesn't work well as expected
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: Red Hat Virtualization Host security update [ovirt-4.4.6] Advisory ID: RHSA-2021:2522-01 Product: Red Hat Virtualization Advisory URL: https://access.redhat.com/errata/RHSA-2021:2522 Issue date: 2021-06-22 CVE Names: CVE-2020-24489 CVE-2021-3501 CVE-2021-3560 CVE-2021-27219 =====================================================================
- Summary:
An update for imgbased, redhat-release-virtualization-host, and redhat-virtualization-host is now available for Red Hat Virtualization 4 for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
RHEL 8-based RHEV-H for RHEV 4 (build requirements) - noarch, x86_64 Red Hat Virtualization 4 Hypervisor for RHEL 8 - x86_64
- Description:
The redhat-virtualization-host packages provide the Red Hat Virtualization Host. These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
The redhat-virtualization-host packages provide the Red Hat Virtualization Host. These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
The ovirt-node-ng packages provide the Red Hat Virtualization Host. These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
Security Fix(es):
-
glib: integer overflow in g_bytes_new function on 64-bit platforms due to an implicit cast from 64 bits to 32 bits (CVE-2021-27219)
-
kernel: userspace applications can misuse the KVM API to cause a write of 16 bytes at an offset up to 32 GB from vcpu->run (CVE-2021-3501)
-
polkit: local privilege escalation using polkit_system_bus_name_get_creds_sync() (CVE-2021-3560)
-
hw: vt-d related privilege escalation (CVE-2020-24489)
For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
-
Previously, systemtap dependencies were not included in the RHV-H channel. Therefore, systemtap could not be installed. In this release, the systemtap dependencies have been included in the channel, resolving the issue. (BZ#1903997)
-
Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):
1903997 - Provide systemtap dependencies within RHV-H channel 1929858 - CVE-2021-27219 glib: integer overflow in g_bytes_new function on 64-bit platforms due to an implicit cast from 64 bits to 32 bits 1950136 - CVE-2021-3501 kernel: userspace applications can misuse the KVM API to cause a write of 16 bytes at an offset up to 32 GB from vcpu->run 1961710 - CVE-2021-3560 polkit: local privilege escalation using polkit_system_bus_name_get_creds_sync() 1962650 - CVE-2020-24489 hw: vt-d related privilege escalation
- Package List:
Red Hat Virtualization 4 Hypervisor for RHEL 8:
Source: redhat-virtualization-host-4.4.6-20210615.0.el8_4.src.rpm
x86_64: redhat-virtualization-host-image-update-4.4.6-20210615.0.el8_4.x86_64.rpm
RHEL 8-based RHEV-H for RHEV 4 (build requirements):
Source: redhat-release-virtualization-host-4.4.6-2.el8ev.src.rpm
noarch: redhat-virtualization-host-image-update-placeholder-4.4.6-2.el8ev.noarch.rpm
x86_64: redhat-release-virtualization-host-4.4.6-2.el8ev.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2020-24489 https://access.redhat.com/security/cve/CVE-2021-3501 https://access.redhat.com/security/cve/CVE-2021-3560 https://access.redhat.com/security/cve/CVE-2021-27219 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYNH6EtzjgjWX9erEAQg8rBAApw3Jn/EPQosAw8RDA053A4aCxO2gHC15 HK1kJ2gSn73kahmvvl3ZAFQW3Wa/OKZRFnbOKZPcJvKeVKnmeHdjmX6V/wNC/bAO i2bc69+GYd+mj3+ngKmTyFFVSsgDWCfFv6lwMl74d0dXYauCfMTiMD/K/06zaQ3b arTdExk9VynIcr19ggOfhGWAe5qX8ZXfPHwRAmDBNZCUjzWm+c+O+gQQiy/wWzMB 6vbtEqKeXfT1XgxjdQO5xfQ4Fvd8ssKXwOjdymCsEoejplVFmO3reBrl+y95P3p9 BCKR6/cWKzhaAXfS8jOlZJvxA0TyxK5+HOP8pGWGfxBixXVbaFR4E/+rnA1E04jp lGXvby0yq1Q3u4/dYKPn7oai1H7b7TOaCKrmTMy3Nwd5mKiT+CqYk2Va0r2+Cy/2 jH6CeaSKJIBFviUalmc7ZbdPR1zfa1LEujaYp8aCez8pNF0Mopf5ThlCwlZdEdxG aTK1VPajNj2i8oveRPgNAzIu7tMh5Cibyo92nkfjhV9ube7WLg4fBKbX/ZfCBS9y osA4oRWUFbJYnHK6Fbr1X3mIYIq0s2y0MO2QZWj8hvzMT+BcQy5byreU4Y6o8ikl hXz6yl7Cu6X7wm32QZNZMWbUwJfksJRBR+dfkhDcGV0/zQpMZpwHDXs06kal9vsY DRQj4fNuEQo= =bDgd -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Bug Fix(es):
-
kernel-rt: update RT source tree to the RHEL-8.4.z0 source tree (BZ#1957489)
-
Description:
This is a kernel live patch module which is automatically loaded by the RPM post-install script to modify the code of a running kernel. 8) - aarch64, noarch, ppc64le, s390x, x86_64
Bug Fix(es):
-
OVS mistakenly using local IP as tun_dst for VXLAN packets (?) (BZ#1944667)
-
Selinux: The task calling security_set_bools() deadlocks with itself when it later calls selinux_audit_rule_match(). (BZ#1945123)
-
[mlx5] tc flower mpls match options does not work (BZ#1952061)
-
mlx5: missing patches for ct.rel (BZ#1952062)
-
CT HWOL: with OVN/OVS, intermittently, load balancer hairpin TCP packets get dropped for seconds in a row (BZ#1952065)
-
[Lenovo 8.3 bug] Blackscreen after clicking on "Settings" icon from top-right corner. (BZ#1952900)
-
RHEL 8.x missing uio upstream fix. (BZ#1952952)
-
Turbostat doesn't show any measured data on AMD Milan (BZ#1952987)
-
P620 no sound from front headset jack (BZ#1954545)
-
RHEL kernel 8.2 and higher are affected by data corruption bug in raid1 arrays using bitmaps. (BZ#1955188)
-
[net/sched] connection failed with DNAT + SNAT by tc action ct (BZ#1956458)
-
========================================================================== Ubuntu Security Notice USN-4983-1 June 03, 2021
linux-oem-5.10 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
Summary:
Several security issues were fixed in the Linux kernel. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33200)
Piotr Krysiuk and Benedict Schlueter discovered that the eBPF implementation in the Linux kernel performed out of bounds speculation on pointer arithmetic. A local attacker could use this to expose sensitive information. (CVE-2021-29155)
Piotr Krysiuk discovered that the eBPF implementation in the Linux kernel did not properly prevent speculative loads in certain situations. A local attacker could use this to expose sensitive information (kernel memory). A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-3501)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS: linux-image-5.10.0-1029-oem 5.10.0-1029.30 linux-image-oem-20.04 5.10.0.1029.30 linux-image-oem-20.04b 5.10.0.1029.30
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202105-0904",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.12"
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "enterprise linux for real time for nfv",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"model": "virtualization host",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "virtualization",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "red hat enterprise linux",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "5.12",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time:8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv_tus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_tus:8.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv:8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:redhat:virtualization_host:4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:redhat:virtualization:4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:netapp:solidfire_baseboard_management_controller_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
}
],
"trust": 0.6
},
"cve": "CVE-2021-3501",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 3.6,
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"impactScore": 4.9,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "LOW",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:L/AC:L/Au:N/C:N/I:P/A:P",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Local",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Partial",
"baseScore": 3.6,
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2021-3501",
"impactScore": null,
"integrityImpact": "Partial",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "Low",
"trust": 0.9,
"userInteractionRequired": null,
"vectorString": "AV:L/AC:L/Au:N/C:N/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 3.6,
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "VHN-391161",
"impactScore": 4.9,
"integrityImpact": "PARTIAL",
"severity": "LOW",
"trust": 0.1,
"vectorString": "AV:L/AC:L/AU:N/C:N/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.1,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 1.8,
"impactScore": 5.2,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.1,
"baseSeverity": "High",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2021-3501",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2021-3501",
"trust": 1.8,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202105-271",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-391161",
"trust": 0.1,
"value": "LOW"
},
{
"author": "VULMON",
"id": "CVE-2021-3501",
"trust": 0.1,
"value": "LOW"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "A flaw was found in the Linux kernel in versions before 5.12. The value of internal.ndata, in the KVM API, is mapped to an array index, which can be updated by a user process at anytime which could lead to an out-of-bounds write. The highest threat from this vulnerability is to data integrity and system availability. Linux Kernel Is vulnerable to an out-of-bounds write.Information is tampered with and denial of service (DoS) It may be put into a state. KVM is one of the kernel-based virtual machines. This vulnerability could result in an out-of-bounds write. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.2.4 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability\nengineers face as they work across a range of public and private cloud\nenvironments. \nClusters and applications are all visible and managed from a single\nconsole\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor\nthis release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.2/html/release_notes/\n\nSecurity fixes:\n\n* redisgraph-tls: redis: integer overflow when configurable limit for\nmaximum supported bulk input size is too big on 32-bit platforms\n(CVE-2021-21309)\n\n* console-header-container: nodejs-netmask: improper input validation of\noctal input data (CVE-2021-28092)\n\n* console-container: nodejs-is-svg: ReDoS via malicious string\n(CVE-2021-28918)\n\nBug fixes: \n\n* RHACM 2.2.4 images (BZ# 1957254)\n\n* Enabling observability for OpenShift Container Storage with RHACM 2.2 on\nOCP 4.7 (BZ#1950832)\n\n* ACM Operator should support using the default route TLS (BZ# 1955270)\n\n* The scrolling bar for search filter does not work properly (BZ# 1956852)\n\n* Limits on Length of MultiClusterObservability Resource Name (BZ# 1959426)\n\n* The proxy setup in install-config.yaml is not worked when IPI installing\nwith RHACM (BZ# 1960181)\n\n* Unable to make SSH connection to a Bitbucket server (BZ# 1966513)\n\n* Observability Thanos store shard crashing - cannot unmarshall DNS message\n(BZ# 1967890)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1950832 - Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7\n1952150 - [DDF] It would be great to see all the options available for the bucket configuration and which attributes are mandatory\n1954506 - [DDF] Table does not contain data about 20 clusters. 
Now it\u0027s difficult to estimate CPU usage with larger clusters\n1954535 - Reinstall Submariner - No endpoints found on one cluster\n1955270 - ACM Operator should support using the default route TLS\n1956852 - The scrolling bar for search filter does not work properly\n1957254 - RHACM 2.2.4 images\n1959426 - Limits on Length of MultiClusterObservability Resource Name\n1960181 - The proxy setup in install-config.yaml is not worked when IPI installing with RHACM. \n1963128 - [DDF] Please rename this to \"Amazon Elastic Kubernetes Service\"\n1966513 - Unable to make SSH connection to a Bitbucket server\n1967357 - [DDF] When I clicked on this yaml, I get a HTTP 404 error. \n1967890 - Observability Thanos store shard crashing - cannot unmarshal DNS message\n\n5. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.16. See the following advisories for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2287\n\nSpace precludes documenting all of the container images in this advisory. \n\nAdditional Changes:\n\nThis update also fixes several bugs. Documentation for these changes is\navailable from the Release Notes document linked to in the References\nsection. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1889659 - [Assisted-4.6] [cluster validation] Number of hosts validation is not enforced when Automatic role assigned\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1932638 - Removing ssh keys MC does not remove the key from authorized_keys\n1934180 - vsphere-problem-detector should check if datastore is part of datastore cluster\n1937396 - when kuryr quotas are unlimited, we should not sent alerts\n1939014 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP\n1939553 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string\n1940275 - [IPI Baremetal] Revert Sending full ignition to masters\n1942603 - [4.7z] Network policies in ovn-kubernetes don\u0027t support external traffic from router when the endpoint publishing strategy is HostNetwork\n1944046 - Warn users when using an unsupported browser such as IE\n1944575 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results\n1945702 - Operator dependency not consistently chosen from default channel\n1946682 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall\n1947091 - Incorrect skipped status for conditional tasks in the pipeline run\n1947427 - Bootstrap ignition shim doesn\u0027t follow proxy settings\n1948398 - [oVirt] remove ovirt_cafile from ovirt-credentials secret\n1949541 - Kuryr-Controller crashes when it\u0027s missing the status object\n1950290 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation\n1951210 - Pod log filename no longer in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1953475 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc\n1954121 - [ceo] [release-4.7] Operator goes degraded when a second internal node ip is added after 
install\n1955210 - OCP 4.6 Build fails when filename contains an umlaut\n1955418 - 4.8 -\u003e 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator\n1955482 - [4.7] Drop high-cardinality metrics from kube-state-metrics which aren\u0027t used\n1955600 - e2e unidling test flakes in CI\n1956565 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry\n1956980 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed. \n1957308 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-\u003e Removed-\u003e Managed in configs.imageregistry with high ratio\n1957499 - OperatorHub - console accepts any value for \"Infrastructure features\" annotation\n1958416 - openshift-oauth-apiserver apiserver pod crashloopbackoffs\n1958467 - [4.7] Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct\n1958873 - Device Replacemet UI, The status of the disk is \"replacement ready\" before I clicked on \"start replacement\"\n1959546 - [4.7] storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions\n1959737 - Unable to assign nodes for EgressIP even if the egress-assignable label is set\n1960093 - Console not works well against a proxy in front of openshift clusters\n1960111 - Port 8080 of oVirt CSI driver is causing collisions with other services\n1960542 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960544 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console\n1960562 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960589 - manifests: extra \"spec.version\" in console quickstarts makes CVO hotloop\n1960645 - [Backport 4.7] Add virt_platform metric to the collected metrics\n1960686 - GlobalConfigPage is constantly requesting resources\n1961069 - CMO end-to-end tests work only on AWS\n1961367 - Conformance tests for OpenStack 
require the Cinder client that is not included in the \"tests\" image\n1961518 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1961557 - [release-4.7] respect the shutdown-delay-duration from OpenShiftAPIServerConfig\n1961719 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop\n1961887 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns\n1962314 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true\n1962493 - Kebab menu of taskrun contains Edit options which should not be present\n1962637 - Nodes tainted after configuring additional host iface\n1962819 - OCP v4.7 installation with OVN-Kubernetes fails with error \"egress bandwidth restriction -1 is not equals\"\n1962949 - e2e-metal-ipi and related jobs fail to bootstrap due to multipe VIP\u0027s\n1963141 - packageserver clusteroperator Available condition set to false on any Deployment spec change\n1963243 - HAproxy pod logs showing error \"another server named \u0027pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080\u0027 was already defined at line 326, please use distinct names\"\n1964322 - UI, The status of \"Used Capacity Breakdown [Pods]\" is \"Not available\"\n1964568 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation\n1965075 - [4.7z] After upgrade from 4.5.16 to 4.6.17, customer\u0027s application is seeing re-transmits\n1965932 - [oauth-server] bump k8s.io/apiserver to 1.20.3\n1966358 - Build failure on s390x\n1966798 - [tests] Release 4.7 broken due to the usage of wrong OCS version\n1966810 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration\n1967328 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud\n1967966 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967972 - [calico] rbac-proxy container in kube-proxy fails to 
create tokenreviews\n1970322 - [OVN]EgressFirewall doesn\u0027t work well as expected\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: Red Hat Virtualization Host security update [ovirt-4.4.6]\nAdvisory ID: RHSA-2021:2522-01\nProduct: Red Hat Virtualization\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:2522\nIssue date: 2021-06-22\nCVE Names: CVE-2020-24489 CVE-2021-3501 CVE-2021-3560 \n CVE-2021-27219 \n=====================================================================\n\n1. Summary:\n\nAn update for imgbased, redhat-release-virtualization-host, and\nredhat-virtualization-host is now available for Red Hat Virtualization 4\nfor Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRHEL 8-based RHEV-H for RHEV 4 (build requirements) - noarch, x86_64\nRed Hat Virtualization 4 Hypervisor for RHEL 8 - x86_64\n\n3. Description:\n\nThe redhat-virtualization-host packages provide the Red Hat Virtualization\nHost. These packages include redhat-release-virtualization-host,\novirt-node, and rhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are\ninstalled using a special build of Red Hat Enterprise Linux with only the\npackages required to host virtual machines. RHVH features a Cockpit user\ninterface for monitoring the host\u0027s resources and performing administrative\ntasks. \n\nThe redhat-virtualization-host packages provide the Red Hat Virtualization\nHost. These packages include redhat-release-virtualization-host,\novirt-node, and rhev-hypervisor. 
Red Hat Virtualization Hosts (RHVH) are\ninstalled using a special build of Red Hat Enterprise Linux with only the\npackages required to host virtual machines. RHVH features a Cockpit user\ninterface for monitoring the host\u0027s resources and performing administrative\ntasks. \n\nThe ovirt-node-ng packages provide the Red Hat Virtualization Host. These\npackages include redhat-release-virtualization-host, ovirt-node, and\nrhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are installed using a\nspecial build of Red Hat Enterprise Linux with only the packages required\nto host virtual machines. RHVH features a Cockpit user interface for\nmonitoring the host\u0027s resources and performing administrative tasks. \n\nSecurity Fix(es):\n\n* glib: integer overflow in g_bytes_new function on 64-bit platforms due to\nan implicit cast from 64 bits to 32 bits (CVE-2021-27219)\n\n* kernel: userspace applications can misuse the KVM API to cause a write of\n16 bytes at an offset up to 32 GB from vcpu-\u003erun (CVE-2021-3501)\n\n* polkit: local privilege escalation using\npolkit_system_bus_name_get_creds_sync() (CVE-2021-3560)\n\n* hw: vt-d related privilege escalation (CVE-2020-24489)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, and other related information, refer to the CVE page(s) listed in\nthe References section. \n\nBug Fix(es):\n\n* Previously, systemtap dependencies were not included in the RHV-H\nchannel. Therefore, systemtap could not be installed. \nIn this release, the systemtap dependencies have been included in the\nchannel, resolving the issue. (BZ#1903997)\n\n4. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1903997 - Provide systemtap dependencies within RHV-H channel\n1929858 - CVE-2021-27219 glib: integer overflow in g_bytes_new function on 64-bit platforms due to an implicit cast from 64 bits to 32 bits\n1950136 - CVE-2021-3501 kernel: userspace applications can misuse the KVM API to cause a write of 16 bytes at an offset up to 32 GB from vcpu-\u003erun\n1961710 - CVE-2021-3560 polkit: local privilege escalation using polkit_system_bus_name_get_creds_sync()\n1962650 - CVE-2020-24489 hw: vt-d related privilege escalation\n\n6. Package List:\n\nRed Hat Virtualization 4 Hypervisor for RHEL 8:\n\nSource:\nredhat-virtualization-host-4.4.6-20210615.0.el8_4.src.rpm\n\nx86_64:\nredhat-virtualization-host-image-update-4.4.6-20210615.0.el8_4.x86_64.rpm\n\nRHEL 8-based RHEV-H for RHEV 4 (build requirements):\n\nSource:\nredhat-release-virtualization-host-4.4.6-2.el8ev.src.rpm\n\nnoarch:\nredhat-virtualization-host-image-update-placeholder-4.4.6-2.el8ev.noarch.rpm\n\nx86_64:\nredhat-release-virtualization-host-4.4.6-2.el8ev.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-24489\nhttps://access.redhat.com/security/cve/CVE-2021-3501\nhttps://access.redhat.com/security/cve/CVE-2021-3560\nhttps://access.redhat.com/security/cve/CVE-2021-27219\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYNH6EtzjgjWX9erEAQg8rBAApw3Jn/EPQosAw8RDA053A4aCxO2gHC15\nHK1kJ2gSn73kahmvvl3ZAFQW3Wa/OKZRFnbOKZPcJvKeVKnmeHdjmX6V/wNC/bAO\ni2bc69+GYd+mj3+ngKmTyFFVSsgDWCfFv6lwMl74d0dXYauCfMTiMD/K/06zaQ3b\narTdExk9VynIcr19ggOfhGWAe5qX8ZXfPHwRAmDBNZCUjzWm+c+O+gQQiy/wWzMB\n6vbtEqKeXfT1XgxjdQO5xfQ4Fvd8ssKXwOjdymCsEoejplVFmO3reBrl+y95P3p9\nBCKR6/cWKzhaAXfS8jOlZJvxA0TyxK5+HOP8pGWGfxBixXVbaFR4E/+rnA1E04jp\nlGXvby0yq1Q3u4/dYKPn7oai1H7b7TOaCKrmTMy3Nwd5mKiT+CqYk2Va0r2+Cy/2\njH6CeaSKJIBFviUalmc7ZbdPR1zfa1LEujaYp8aCez8pNF0Mopf5ThlCwlZdEdxG\naTK1VPajNj2i8oveRPgNAzIu7tMh5Cibyo92nkfjhV9ube7WLg4fBKbX/ZfCBS9y\nosA4oRWUFbJYnHK6Fbr1X3mIYIq0s2y0MO2QZWj8hvzMT+BcQy5byreU4Y6o8ikl\nhXz6yl7Cu6X7wm32QZNZMWbUwJfksJRBR+dfkhDcGV0/zQpMZpwHDXs06kal9vsY\nDRQj4fNuEQo=\n=bDgd\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. \n\nBug Fix(es):\n\n* kernel-rt: update RT source tree to the RHEL-8.4.z0 source tree\n(BZ#1957489)\n\n4. Description:\n\nThis is a kernel live patch module which is automatically loaded by the RPM\npost-install script to modify the code of a running kernel. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. \n\nBug Fix(es):\n\n* OVS mistakenly using local IP as tun_dst for VXLAN packets (?)\n(BZ#1944667)\n\n* Selinux: The task calling security_set_bools() deadlocks with itself when\nit later calls selinux_audit_rule_match(). 
(BZ#1945123)\n\n* [mlx5] tc flower mpls match options does not work (BZ#1952061)\n\n* mlx5: missing patches for ct.rel (BZ#1952062)\n\n* CT HWOL: with OVN/OVS, intermittently, load balancer hairpin TCP packets\nget dropped for seconds in a row (BZ#1952065)\n\n* [Lenovo 8.3 bug] Blackscreen after clicking on \"Settings\" icon from\ntop-right corner. (BZ#1952900)\n\n* RHEL 8.x missing uio upstream fix. (BZ#1952952)\n\n* Turbostat doesn\u0027t show any measured data on AMD Milan (BZ#1952987)\n\n* P620 no sound from front headset jack (BZ#1954545)\n\n* RHEL kernel 8.2 and higher are affected by data corruption bug in raid1\narrays using bitmaps. (BZ#1955188)\n\n* [net/sched] connection failed with DNAT + SNAT by tc action ct\n(BZ#1956458)\n\n4. ==========================================================================\nUbuntu Security Notice USN-4983-1\nJune 03, 2021\n\nlinux-oem-5.10 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2021-33200)\n\nPiotr Krysiuk and Benedict Schlueter discovered that the eBPF\nimplementation in the Linux kernel performed out of bounds speculation on\npointer arithmetic. A local attacker could use this to expose sensitive\ninformation. (CVE-2021-29155)\n\nPiotr Krysiuk discovered that the eBPF implementation in the Linux kernel\ndid not properly prevent speculative loads in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. 
(CVE-2021-3501)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.10.0-1029-oem 5.10.0-1029.30\n linux-image-oem-20.04 5.10.0.1029.30\n linux-image-oem-20.04b 5.10.0.1029.30\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-3501"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
},
{
"db": "PACKETSTORM",
"id": "162977"
}
],
"trust": 2.43
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2021-3501",
"trust": 4.1
},
{
"db": "PACKETSTORM",
"id": "162977",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "163149",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "162881",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "162936",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2021.1945",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1919",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1868",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2131",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "162890",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "162882",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "163242",
"trust": 0.2
},
{
"db": "VULHUB",
"id": "VHN-391161",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-3501",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163188",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
},
{
"db": "PACKETSTORM",
"id": "162977"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"id": "VAR-202105-0904",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T19:50:13.905000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Bug\u00a01950136",
"trust": 0.8,
"url": "http://www.kernel.org"
},
{
"title": "Linux kernel Buffer error vulnerability fix",
"trust": 0.6,
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=150809"
},
{
"title": "Red Hat: CVE-2021-3501",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-3501"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-3501 log"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
},
{
"problemtype": "Out-of-bounds writing (CWE-787) [ Other ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.8,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=1950136"
},
{
"trust": 1.8,
"url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=04c4f2ee3f68c9a4bf1653d15f1a9a435ae33f7a"
},
{
"trust": 1.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3501"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20210618-0008/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3501"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162977/ubuntu-security-notice-usn-4983-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2131"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1919"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163149/red-hat-security-advisory-2021-2286-01.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-memory-corruption-via-kvm-35276"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162936/ubuntu-security-notice-usn-4977-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162881/red-hat-security-advisory-2021-2169-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1868"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1945"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3543"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-27219"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3543"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27219"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/787.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25039"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8286"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28196"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15358"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21639"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12364"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28165"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28092"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13434"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25037"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25037"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13776"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24977"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12363"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10878"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29362"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28935"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28163"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-14502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25034"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8285"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25035"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-9169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25038"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21309"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21640"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29361"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25042"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25042"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12362"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25648"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25038"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8648"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25036"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27170"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25215"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25692"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3326"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25036"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25035"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-2708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23336"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-2433"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8927"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3347"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12363"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29363"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24332"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3114"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28362"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25039"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12364"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2461"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8284"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25034"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27618"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2286"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2287"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24489"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3560"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24489"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3560"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2165"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2168"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29155"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31829"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33200"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem-5.10/5.10.0-1029.30"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-4983-1"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
},
{
"db": "PACKETSTORM",
"id": "162977"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
},
{
"db": "PACKETSTORM",
"id": "162977"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2021-05-06T00:00:00",
"db": "VULHUB",
"id": "VHN-391161"
},
{
"date": "2021-05-06T00:00:00",
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"date": "2022-01-13T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"date": "2021-06-17T17:53:22",
"db": "PACKETSTORM",
"id": "163188"
},
{
"date": "2021-06-15T14:59:25",
"db": "PACKETSTORM",
"id": "163149"
},
{
"date": "2021-06-22T19:34:25",
"db": "PACKETSTORM",
"id": "163242"
},
{
"date": "2021-06-01T15:03:46",
"db": "PACKETSTORM",
"id": "162881"
},
{
"date": "2021-06-01T15:04:05",
"db": "PACKETSTORM",
"id": "162882"
},
{
"date": "2021-06-01T15:11:57",
"db": "PACKETSTORM",
"id": "162890"
},
{
"date": "2021-06-04T13:47:07",
"db": "PACKETSTORM",
"id": "162977"
},
{
"date": "2021-05-06T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"date": "2021-05-06T13:15:12.840000",
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-05-13T00:00:00",
"db": "VULHUB",
"id": "VHN-391161"
},
{
"date": "2021-05-14T00:00:00",
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"date": "2022-01-13T08:56:00",
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"date": "2021-06-17T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"date": "2022-05-13T20:52:55.127000",
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "162977"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
}
],
"trust": 0.7
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Linux\u00a0Kernel\u00a0 Out-of-bounds Vulnerability in Microsoft",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "buffer error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
}
],
"trust": 0.6
}
}
VAR-202206-1428
Vulnerability from variot - Updated: 2024-07-23 19:47 In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When CVE-2022-1292 was fixed, it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0, 3.0.1, 3.0.2, 3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Description:
Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud.
For more information about Submariner, see the Submariner open source community website at: https://submariner.io/.
This advisory contains bug fixes and enhancements to the Submariner container images. Description:
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.
Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage Release Notes for information on the most significant of these changes:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index
All users of Red Hat Ceph Storage are advised to pull these new images from the Red Hat Ecosystem catalog, which provides numerous enhancements and bug fixes. Bugs fixed (https://bugzilla.redhat.com/):
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2115198 - build ceph containers for RHCS 5.2 release
- Summary:
OpenShift API for Data Protection (OADP) 1.1.0 is now available. Description:
OpenShift API for Data Protection (OADP) enables you to back up and restore application resources, persistent volume data, and internal container images to external backup storage. OADP enables both file system-based and snapshot-based backups for persistent volumes. Bugs fixed (https://bugzilla.redhat.com/):
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- JIRA issues fixed (https://issues.jboss.org/):
OADP-145 - Restic Restore stuck on InProgress status when app is deployed with DeploymentConfig
OADP-154 - Ensure support for backing up resources based on different label selectors
OADP-194 - Remove the registry dependency from OADP
OADP-199 - Enable support for restore of existing resources
OADP-224 - Restore silently ignore resources if they exist - restore log not updated
OADP-225 - Restore doesn't update velero.io/backup-name when a resource is updated
OADP-234 - Implementation of incremental restore
OADP-324 - Add label to Expired backups failing garbage collection
OADP-382 - 1.1: Update downstream OLM channels to support different x and y-stream releases
OADP-422 - [GCP] An attempt of snapshoting volumes on CSI storageclass using Velero-native snapshots fails because it's unable to find the zone
OADP-423 - CSI Backup is not blocked and does not wait for snapshot to complete
OADP-478 - volumesnapshotcontent cannot be deleted; SnapshotDeleteError Failed to delete snapshot
OADP-528 - The volumesnapshotcontent is not removed for the synced backup
OADP-533 - OADP Backup via Ceph CSI snapshot hangs indefinitely on OpenShift v4.10
OADP-538 - typo on noDefaultBackupLocation error on DPA CR
OADP-552 - Validate OADP with 4.11 and Pod Security Admissions
OADP-558 - Empty Failed Backup CRs can't be removed
OADP-585 - OADP 1.0.3: CSI functionality is broken on OCP 4.11 due to missing v1beta1 API version
OADP-586 - registry deployment still exists on 1.1 build, and the registry pod gets recreated endlessly
OADP-592 - OADP must-gather add support for insecure tls
OADP-597 - BSL validation logs
OADP-598 - Data mover performance on backup blocks backup process
OADP-599 - [Data Mover] Datamover Restic secret cannot be configured per bsl
OADP-600 - Operator should validate volsync installation and raise warning if data mover is enabled
OADP-602 - Support GCP for openshift-velero-plugin registry
OADP-605 - [OCP 4.11] CSI restore fails with admission webhook "volumesnapshotclasses.snapshot.storage.k8s.io" denied
OADP-607 - DataMover: VSB is stuck on SnapshotBackupDone
OADP-610 - Data mover fails if a stale volumesnapshot exists in application namespace
OADP-613 - DataMover: upstream documentation refers wrong CRs
OADP-637 - Restic backup fails with CA certificate
OADP-643 - [Data Mover] VSB and VSR names are not unique
OADP-644 - VolumeSnapshotBackup and VolumeSnapshotRestore timeouts should be configurable
OADP-648 - Remove default limits for velero and restic pods
OADP-652 - Data mover VolSync pod errors with Noobaa
OADP-655 - DataMover: volsync-dst-vsr pod completes although not all items where restored in the namespace
OADP-660 - Data mover restic secret does not support Azure
OADP-698 - DataMover: volume-snapshot-mover pod points to upstream image
OADP-715 - Restic restore fails: restic-wait container continuously fails with "Not found: /restores//.velero/"
OADP-716 - Incremental restore: second restore of a namespace partially fails
OADP-736 - Data mover VSB always fails with volsync 0.5
- Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- vm2: Sandbox Escape in vm2 (CVE-2022-36067)
Bug fixes:
-
Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters (BZ# 2074547)
-
OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constrain (BZ# 2082254)
-
subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec (BZ# 2083659)
-
Yaml editor for creating vSphere cluster moves to next line after typing (BZ# 2086883)
-
Submariner addon status doesn't track all deployment failures (BZ# 2090311)
-
Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret (BZ# 2091170)
-
After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors (BZ# 2095481)
-
Enforce failed and report the violation after modified memory value in limitrange policy (BZ# 2100036)
-
Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)" (BZ# 2101577)
-
Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies (BZ# 2102273)
-
managed cluster is in "unknown" state for 120 mins after OADP restore
-
RHACM 2.5.2 images (BZ# 2104553)
-
Subscription UI does not allow binding to label with empty value (BZ# 2104961)
-
Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ# 2106069)
-
Region information is not available for Azure cloud in managedcluster CR (BZ# 2107134)
-
cluster uninstall log points to incorrect container name (BZ# 2107359)
-
ACM shows wrong path for Argo CD applicationset git generator (BZ# 2107885)
-
Single node checkbox not visible for 4.11 images (BZ# 2109134)
-
Unable to deploy hypershift cluster when enabling validate-cluster-security (BZ# 2109544)
-
Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application (BZ# 2110026)
-
After the creation by a policy of job or deployment (in case the object is missing)ACM is trying to add new containers instead of updating (BZ# 2117728)
-
pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)
-
ArgoCD and AppSet Applications do not deploy to local-cluster (BZ# 2124707)
-
Bugs fixed (https://bugzilla.redhat.com/):
2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters
2082254 - OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint
2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec
2086883 - Yaml editor for creating vSphere cluster moves to next line after typing
2090311 - Submariner addon status doesn't track all deployment failures
2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret
2095481 - After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors
2100036 - Enforce failed and report the violation after modified memory value in limitrange policy
2101577 - Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)"
2102273 - Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies
2103653 - managed cluster is in "unknown" state for 120 mins after OADP restore
2104553 - RHACM 2.5.2 images
2104961 - Subscription UI does not allow binding to label with empty value
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD
2107134 - Region information is not available for Azure cloud in managedcluster CR
2107359 - cluster uninstall log points to incorrect container name
2107885 - ACM shows wrong path for Argo CD applicationset git generator
2109134 - Single node checkbox not visible for 4.11 images
2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application
2117728 - After the creation by a policy of job or deployment (in case the object is missing)ACM is trying to add new containers instead of updating
2122292 - pods in CrashLoopBackoff on 3.11 managed cluster
2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster
2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
Bug Fix(es):
-
Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api (BZ#2033191)
-
Restart of VM Pod causes SSH keys to be regenerated within VM (BZ#2087177)
-
Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR (BZ#2089391)
-
[4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass (BZ#2098225)
-
Fedora version in DataImportCrons is not 'latest' (BZ#2102694)
-
[4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted (BZ#2109407)
-
CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls (BZ#2110562)
-
Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based (BZ#2112643)
-
Unable to start windows VMs on PSI setups (BZ#2115371)
-
[4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 (BZ#2128997)
-
Mark Windows 11 as TechPreview (BZ#2129013)
-
4.11.1 rpms (BZ#2139453)
This advisory contains the following OpenShift Virtualization 4.11.1 images.
RHEL-8-CNV-4.11
virt-cdi-operator-container-v4.11.1-5 virt-cdi-uploadserver-container-v4.11.1-5 virt-cdi-apiserver-container-v4.11.1-5 virt-cdi-importer-container-v4.11.1-5 virt-cdi-controller-container-v4.11.1-5 virt-cdi-cloner-container-v4.11.1-5 virt-cdi-uploadproxy-container-v4.11.1-5 checkup-framework-container-v4.11.1-3 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7 kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7 kubevirt-template-validator-container-v4.11.1-4 virt-handler-container-v4.11.1-5 hostpath-provisioner-operator-container-v4.11.1-4 virt-api-container-v4.11.1-5 vm-network-latency-checkup-container-v4.11.1-3 cluster-network-addons-operator-container-v4.11.1-5 virtio-win-container-v4.11.1-4 virt-launcher-container-v4.11.1-5 ovs-cni-marker-container-v4.11.1-5 hyperconverged-cluster-webhook-container-v4.11.1-7 virt-controller-container-v4.11.1-5 virt-artifacts-server-container-v4.11.1-5 kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7 kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7 libguestfs-tools-container-v4.11.1-5 hostpath-provisioner-container-v4.11.1-4 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7 kubevirt-tekton-tasks-copy-template-container-v4.11.1-7 cnv-containernetworking-plugins-container-v4.11.1-5 bridge-marker-container-v4.11.1-5 virt-operator-container-v4.11.1-5 hostpath-csi-driver-container-v4.11.1-4 kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7 kubemacpool-container-v4.11.1-5 hyperconverged-cluster-operator-container-v4.11.1-7 kubevirt-ssp-operator-container-v4.11.1-4 ovs-cni-plugin-container-v4.11.1-5 kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7 kubevirt-tekton-tasks-operator-container-v4.11.1-2 cnv-must-gather-container-v4.11.1-8 kubevirt-console-plugin-container-v4.11.1-9 hco-bundle-registry-container-v4.11.1-49
- Bugs fixed (https://bugzilla.redhat.com/):
2033191 - Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2070772 - When specifying pciAddress for several SR-IOV NIC they are not correctly propagated to libvirt XML
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2087177 - Restart of VM Pod causes SSH keys to be regenerated within VM
2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR
2091856 - "Edit BootSource" action should have more explicit information when disabled
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2098225 - [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2102694 - Fedora version in DataImportCrons is not 'latest'
2109407 - [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted
2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls
2112643 - Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based
2115371 - Unable to start windows VMs on PSI setups
2119613 - GiB changes to B in Template's Edit boot source reference modal
2128554 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass
2128872 - [4.11]Can't restore cloned VM
2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2129013 - Mark Windows 11 as TechPreview
2129235 - [RFE] Add "Copy SSH command" to VM action list
2134668 - Cannot edit ssh even vm is stopped
2139453 - 4.11.1 rpms
- Bugs fixed (https://bugzilla.redhat.com/):
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays
2140597 - CVE-2022-37603 loader-utils: Regular expression denial of service
- JIRA issues fixed (https://issues.jboss.org/):
LOG-2860 - Error on LokiStack Components when forwarding logs to Loki on proxy cluster
LOG-3131 - vector: kube API server certificate validation failure due to hostname mismatch
LOG-3222 - [release-5.5] fluentd plugin for kafka ca-bundle secret doesn't support multiple CAs
LOG-3226 - FluentdQueueLengthIncreasing rule failing to be evaluated.
LOG-3284 - [release-5.5][Vector] logs parsed into structured when json is set without structured types.
LOG-3287 - [release-5.5] Increase value of cluster-logging PriorityClass to move closer to system-cluster-critical value
LOG-3301 - [release-5.5][ClusterLogging] elasticsearchStatus in ClusterLogging instance CR is not updated when Elasticsearch status is changed
LOG-3305 - [release-5.5] Kibana Authentication Exception cookie issue
LOG-3310 - [release-5.5] Can't choose correct CA ConfigMap Key when creating lokistack in Console
LOG-3332 - [release-5.5] Reconcile error on controller when creating LokiStack with tls config
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update Advisory ID: RHSA-2022:8917-01 Product: Red Hat JBoss Web Server Advisory URL: https://access.redhat.com/errata/RHSA-2022:8917 Issue date: 2022-12-12 CVE Names: CVE-2022-1292 CVE-2022-2068 ==================================================================== 1. Summary:
An update is now available for Red Hat JBoss Web Server 5.7.1 on Red Hat Enterprise Linux versions 7, 8, and 9.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat JBoss Web Server 5.7 for RHEL 7 Server - x86_64 Red Hat JBoss Web Server 5.7 for RHEL 8 - x86_64 Red Hat JBoss Web Server 5.7 for RHEL 9 - x86_64
- Description:
Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library.
This release of Red Hat JBoss Web Server 5.7.1 serves as a replacement for Red Hat JBoss Web Server 5.7.0. This release includes bug fixes, enhancements and component upgrades, which are documented in the Release Notes, linked to in the References.
Security Fix(es):
* openssl: c_rehash script allows command injection (CVE-2022-1292)

* openssl: the c_rehash script allows command injection (CVE-2022-2068)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
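Both CVEs above are the same bug class: the c_rehash script built shell command lines by interpolating certificate file names, so shell metacharacters embedded in a name could execute with the script's privileges. The following self-contained sketch illustrates that pattern in general; the file name and temporary paths are contrived assumptions for demonstration, not code taken from c_rehash itself.

```shell
#!/bin/sh
# Contrived demo of the injection pattern behind CVE-2022-1292/2068:
# a file name containing $(...) runs a command when the name is
# re-parsed by a shell, but not when passed as a plain argument.
tmpdir=$(mktemp -d)
evil='$(touch '"$tmpdir"'/pwned).pem'

# UNSAFE: interpolating the name into a command string re-parses it,
# so the embedded $(touch ...) executes and creates "pwned".
sh -c "ls $evil" 2>/dev/null || true

# SAFE: the name is passed as a single argument; nothing inside it
# is ever interpreted as shell syntax.
ls -- "$evil" 2>/dev/null || true

ls "$tmpdir"   # "pwned" exists only because of the unsafe call
```

Because c_rehash was distributed by some operating systems in a way that executed it automatically over directories of certificates, an attacker who could influence a certificate file name could reach the unsafe pattern without any further interaction.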
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
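Beyond applying the updated packages, the upstream guidance for these CVEs is to stop invoking the obsolete c_rehash script and use the built-in `openssl rehash` subcommand (available since OpenSSL 1.1.0), which creates the hashed symlinks a certificate directory needs without routing file names through a shell. A minimal sketch, assuming `openssl` is on the PATH and using a throwaway directory for illustration:

```shell
#!/bin/sh
# Sketch: hash a certificate directory with the native subcommand
# instead of the deprecated c_rehash Perl script. For each PEM
# certificate present, rehash creates a <subject-hash>.N symlink;
# the empty temp directory here is purely illustrative.
certdir=$(mktemp -d)
openssl rehash "$certdir"
echo "rehash exit status: $?"
```

On systems where c_rehash is still wired into automated jobs (the scenario the advisory describes), the package update replaces the vulnerable script; switching those jobs to `openssl rehash` is the long-term fix.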
- Package List:
Red Hat JBoss Web Server 5.7 for RHEL 7 Server:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el7jws.src.rpm
x86_64:
jws5-tomcat-native-1.2.31-11.redhat_11.el7jws.x86_64.rpm
jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el7jws.x86_64.rpm
Red Hat JBoss Web Server 5.7 for RHEL 8:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el8jws.src.rpm
x86_64:
jws5-tomcat-native-1.2.31-11.redhat_11.el8jws.x86_64.rpm
jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el8jws.x86_64.rpm
Red Hat JBoss Web Server 5.7 for RHEL 9:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el9jws.src.rpm
x86_64:
jws5-tomcat-native-1.2.31-11.redhat_11.el9jws.x86_64.rpm
jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el9jws.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202206-1428",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "sannav",
"scope": "eq",
"trust": 1.0,
"vendor": "broadcom",
"version": null
},
{
"model": "santricity smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"model": "fas 8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "aff a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h615c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "snapmanager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "openssl",
"scope": "lt",
"trust": 1.0,
"vendor": "openssl",
"version": "1.1.1p"
},
{
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h610c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "openssl",
"scope": "lt",
"trust": 1.0,
"vendor": "openssl",
"version": "3.0.4"
},
{
"model": "openssl",
"scope": "lt",
"trust": 1.0,
"vendor": "openssl",
"version": "1.0.2zf"
},
{
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "openssl",
"scope": "gte",
"trust": 1.0,
"vendor": "openssl",
"version": "1.0.2"
},
{
"model": "openssl",
"scope": "gte",
"trust": 1.0,
"vendor": "openssl",
"version": "3.0.0"
},
{
"model": "aff 8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "aff 8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "openssl",
"scope": "gte",
"trust": 1.0,
"vendor": "openssl",
"version": "1.1.1"
},
{
"model": "fas 8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "sinec ins",
"scope": "eq",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"model": "fas a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "sinec ins",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0.4",
"versionStartIncluding": "3.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.1.1p",
"versionStartIncluding": "1.1.1",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.0.2zf",
"versionStartIncluding": "1.0.2",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:element_software:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:smi-s_provider:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:snapmanager:-:*:*:*:*:hyper-v:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:ontap_antivirus_connector:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:bootstrap_os:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h615c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h615c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h610s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h610s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h610c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h610c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fas_8300_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fas_8300:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fas_8700_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fas_8700:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:fas_a400_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:fas_a400:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:aff_8300_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:aff_8300:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:aff_8700_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:aff_8700:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:aff_a400_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:aff_a400:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:broadcom:sannav:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "168351"
},
{
"db": "PACKETSTORM",
"id": "168228"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "170197"
}
],
"trust": 0.9
},
"cve": "CVE-2022-2068",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "COMPLETE",
"baseScore": 10.0,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 10.0,
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "HIGH",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULMON",
"availabilityImpact": "COMPLETE",
"baseScore": 10.0,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 10.0,
"id": "CVE-2022-2068",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "HIGH",
"trust": 0.1,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-2068",
"trust": 1.0,
"value": "CRITICAL"
},
{
"author": "CNNVD",
"id": "CNNVD-202206-2112",
"trust": 0.6,
"value": "CRITICAL"
},
{
"author": "VULMON",
"id": "CVE-2022-2068",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2112"
},
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When the CVE-2022-1292 was fixed it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Description:\n\nSubmariner enables direct networking between pods and services on different\nKubernetes clusters that are either on-premises or in the cloud. \n\nFor more information about Submariner, see the Submariner open source\ncommunity website at: https://submariner.io/. \n\nThis advisory contains bug fixes and enhancements to the Submariner\ncontainer images. Description:\n\nRed Hat Ceph Storage is a scalable, open, software-defined storage platform\nthat combines the most stable version of the Ceph storage system with a\nCeph management platform, deployment utilities, and support services. \n\nSpace precludes documenting all of these changes in this advisory. 
Users\nare directed to the Red Hat Ceph Storage Release Notes for information on\nthe most significant of these changes:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index\n\nAll users of Red Hat Ceph Storage are advised to pull these new images from\nthe Red Hat Ecosystem catalog, which provides numerous enhancements and bug\nfixes. Bugs fixed (https://bugzilla.redhat.com/):\n\n2031228 - CVE-2021-43813 grafana: directory traversal vulnerability\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2115198 - build ceph containers for RHCS 5.2 release\n\n5. Summary:\n\nOpenShift API for Data Protection (OADP) 1.1.0 is now available. Description:\n\nOpenShift API for Data Protection (OADP) enables you to back up and restore\napplication resources, persistent volume data, and internal container\nimages to external backup storage. OADP enables both file system-based and\nsnapshot-based backups for persistent volumes. Bugs fixed (https://bugzilla.redhat.com/):\n\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nOADP-145 - Restic Restore stuck on InProgress status when app is deployed with DeploymentConfig\nOADP-154 - Ensure support for backing up resources based on different label selectors\nOADP-194 - Remove the registry dependency from OADP\nOADP-199 - Enable support for restore of existing resources\nOADP-224 - Restore silently ignore resources if they exist - restore log not updated\nOADP-225 - Restore doesn\u0027t update velero.io/backup-name when a resource is updated\nOADP-234 - Implementation of incremental restore\nOADP-324 - Add label to Expired backups failing garbage collection\nOADP-382 - 1.1: Update downstream OLM channels to support different x and y-stream releases\nOADP-422 - [GCP] An attempt of snapshoting volumes on CSI storageclass using Velero-native snapshots fails because it\u0027s unable to find the zone\nOADP-423 - CSI Backup is not blocked and does not wait for snapshot to complete\nOADP-478 - volumesnapshotcontent cannot be deleted; SnapshotDeleteError Failed to delete snapshot\nOADP-528 - The volumesnapshotcontent is not removed for the synced backup\nOADP-533 - OADP Backup via Ceph CSI snapshot hangs indefinitely on OpenShift v4.10\nOADP-538 - typo on noDefaultBackupLocation error on DPA CR\nOADP-552 - Validate OADP with 4.11 and Pod Security Admissions\nOADP-558 - Empty Failed Backup CRs can\u0027t be removed\nOADP-585 - OADP 1.0.3: CSI functionality is broken on OCP 4.11 due to missing v1beta1 API version\nOADP-586 - registry deployment still exists on 1.1 build, and the registry pod gets recreated endlessly\nOADP-592 - OADP must-gather add support for insecure tls\nOADP-597 - BSL validation logs\nOADP-598 - Data mover performance on backup blocks backup process\nOADP-599 - [Data Mover] Datamover Restic secret cannot be configured per bsl\nOADP-600 - Operator should validate volsync installation and raise warning if data mover is enabled\nOADP-602 - Support GCP for openshift-velero-plugin 
registry\nOADP-605 - [OCP 4.11] CSI restore fails with admission webhook \\\"volumesnapshotclasses.snapshot.storage.k8s.io\\\" denied\nOADP-607 - DataMover: VSB is stuck on SnapshotBackupDone\nOADP-610 - Data mover fails if a stale volumesnapshot exists in application namespace\nOADP-613 - DataMover: upstream documentation refers wrong CRs\nOADP-637 - Restic backup fails with CA certificate\nOADP-643 - [Data Mover] VSB and VSR names are not unique\nOADP-644 - VolumeSnapshotBackup and VolumeSnapshotRestore timeouts should be configurable\nOADP-648 - Remove default limits for velero and restic pods\nOADP-652 - Data mover VolSync pod errors with Noobaa\nOADP-655 - DataMover: volsync-dst-vsr pod completes although not all items where restored in the namespace\nOADP-660 - Data mover restic secret does not support Azure\nOADP-698 - DataMover: volume-snapshot-mover pod points to upstream image\nOADP-715 - Restic restore fails: restic-wait container continuously fails with \"Not found: /restores/\u003cpod-volume\u003e/.velero/\u003crestore-UID\u003e\"\nOADP-716 - Incremental restore: second restore of a namespace partially fails\nOADP-736 - Data mover VSB always fails with volsync 0.5\n\n6. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes:\n\n* moment: inefficient parsing algorithim resulting in DoS (CVE-2022-31129)\n* vm2: Sandbox Escape in vm2 (CVE-2022-36067)\n\nBug fixes:\n\n* Submariner Globalnet e2e tests failed on MTU between On-Prem to Public\nclusters (BZ# 2074547)\n\n* OCP 4.11 - Install fails because of: pods\n\"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate\nagainst any security context constrain (BZ# 2082254)\n\n* subctl gather fails to gather libreswan data if CableDriver field is\nmissing/empty in Submariner Spec (BZ# 2083659)\n\n* Yaml editor for creating vSphere cluster moves to next line after typing\n(BZ# 2086883)\n\n* Submariner addon status doesn\u0027t track all deployment failures (BZ#\n2090311)\n\n* Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn\nwithout including s3 secret (BZ# 2091170)\n\n* After switching to ACM 2.5 the managed clusters log \"unable to create\nClusterClaim\" errors (BZ# 2095481)\n\n* Enforce failed and report the violation after modified memory value in\nlimitrange policy (BZ# 2100036)\n\n* Creating an application fails with \"This application has no subscription\nmatch selector (spec.selector.matchExpressions)\" (BZ# 2101577)\n\n* Inconsistent cluster resource statuses between \"All Subscription\"\ntopology and individual topologies (BZ# 2102273)\n\n* managed cluster is in \"unknown\" state for 120 mins after OADP restore\n\n* RHACM 2.5.2 images (BZ# 2104553)\n\n* Subscription UI does not allow binding to label with empty value (BZ#\n2104961)\n\n* Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ#\n2106069)\n\n* Region information is not available for Azure cloud in managedcluster CR\n(BZ# 2107134)\n\n* 
cluster uninstall log points to incorrect container name (BZ# 2107359)\n\n* ACM shows wrong path for Argo CD applicationset git generator (BZ#\n2107885)\n\n* Single node checkbox not visible for 4.11 images (BZ# 2109134)\n\n* Unable to deploy hypershift cluster when enabling\nvalidate-cluster-security (BZ# 2109544)\n\n* Deletion of Application (including app related resources) from the\nconsole fails to delete PlacementRule for the application (BZ# 20110026)\n\n* After the creation by a policy of job or deployment (in case the object\nis missing)ACM is trying to add new containers instead of updating (BZ#\n2117728)\n\n* pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)\n\n* ArgoCD and AppSet Applications do not deploy to local-cluster (BZ#\n2124707)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters\n2082254 - OCP 4.11 - Install fails because of: pods \"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate against any security context constraint\n2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec\n2086883 - Yaml editor for creating vSphere cluster moves to next line after typing\n2090311 - Submariner addon status doesn\u0027t track all deployment failures\n2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret\n2095481 - After switching to ACM 2.5 the managed clusters log \"unable to create ClusterClaim\" errors\n2100036 - Enforce failed and report the violation after modified memory value in limitrange policy\n2101577 - Creating an application fails with \"This application has no subscription match selector (spec.selector.matchExpressions)\"\n2102273 - Inconsistent cluster resource statuses between \"All Subscription\" topology and individual topologies\n2103653 - managed cluster is in \"unknown\" state for 120 mins after OADP 
restore\n2104553 - RHACM 2.5.2 images\n2104961 - Subscription UI does not allow binding to label with empty value\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD\n2107134 - Region information is not available for Azure cloud in managedcluster CR\n2107359 - cluster uninstall log points to incorrect container name\n2107885 - ACM shows wrong path for Argo CD applicationset git generator\n2109134 - Single node checkbox not visible for 4.11 images\n2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application\n2117728 - After the creation by a policy of job or deployment (in case the object is missing)ACM is trying to add new containers instead of updating\n2122292 - pods in CrashLoopBackoff on 3.11 managed cluster\n2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5. 
\n\nBug Fix(es):\n\n* Cloning a Block DV to VM with Filesystem with not big enough size comes\nto endless loop - using pvc api (BZ#2033191)\n\n* Restart of VM Pod causes SSH keys to be regenerated within VM\n(BZ#2087177)\n\n* Import gzipped raw file causes image to be downloaded and uncompressed to\nTMPDIR (BZ#2089391)\n\n* [4.11] VM Snapshot Restore hangs indefinitely when backed by a\nsnapshotclass (BZ#2098225)\n\n* Fedora version in DataImportCrons is not \u0027latest\u0027 (BZ#2102694)\n\n* [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is\ndeleted (BZ#2109407)\n\n* CNV introduces a compliance check fail in \"ocp4-moderate\" profile -\nroutes-protected-by-tls (BZ#2110562)\n\n* Nightly build: v4.11.0-578: index format was changed in 4.11 to\nfile-based instead of sqlite-based (BZ#2112643)\n\n* Unable to start windows VMs on PSI setups (BZ#2115371)\n\n* [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity\nrestricted:v1.24 (BZ#2128997)\n\n* Mark Windows 11 as TechPreview (BZ#2129013)\n\n* 4.11.1 rpms (BZ#2139453)\n\nThis advisory contains the following OpenShift Virtualization 4.11.1\nimages. 
\n\nRHEL-8-CNV-4.11\n\nvirt-cdi-operator-container-v4.11.1-5\nvirt-cdi-uploadserver-container-v4.11.1-5\nvirt-cdi-apiserver-container-v4.11.1-5\nvirt-cdi-importer-container-v4.11.1-5\nvirt-cdi-controller-container-v4.11.1-5\nvirt-cdi-cloner-container-v4.11.1-5\nvirt-cdi-uploadproxy-container-v4.11.1-5\ncheckup-framework-container-v4.11.1-3\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7\nkubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7\nkubevirt-template-validator-container-v4.11.1-4\nvirt-handler-container-v4.11.1-5\nhostpath-provisioner-operator-container-v4.11.1-4\nvirt-api-container-v4.11.1-5\nvm-network-latency-checkup-container-v4.11.1-3\ncluster-network-addons-operator-container-v4.11.1-5\nvirtio-win-container-v4.11.1-4\nvirt-launcher-container-v4.11.1-5\novs-cni-marker-container-v4.11.1-5\nhyperconverged-cluster-webhook-container-v4.11.1-7\nvirt-controller-container-v4.11.1-5\nvirt-artifacts-server-container-v4.11.1-5\nkubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7\nlibguestfs-tools-container-v4.11.1-5\nhostpath-provisioner-container-v4.11.1-4\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7\nkubevirt-tekton-tasks-copy-template-container-v4.11.1-7\ncnv-containernetworking-plugins-container-v4.11.1-5\nbridge-marker-container-v4.11.1-5\nvirt-operator-container-v4.11.1-5\nhostpath-csi-driver-container-v4.11.1-4\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7\nkubemacpool-container-v4.11.1-5\nhyperconverged-cluster-operator-container-v4.11.1-7\nkubevirt-ssp-operator-container-v4.11.1-4\novs-cni-plugin-container-v4.11.1-5\nkubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7\nkubevirt-tekton-tasks-operator-container-v4.11.1-2\ncnv-must-gather-container-v4.11.1-8\nkubevirt-console-plugin-container-v4.11.1-9\nhco-bundle-registry-container-v4.11.1-49\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2033191 - Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2070772 - When specifying pciAddress for several SR-IOV NIC they are not correctly propagated to libvirt XML\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2087177 - Restart of VM Pod causes SSH keys to be regenerated within VM\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2098225 - [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2102694 - Fedora version in DataImportCrons is not \u0027latest\u0027\n2109407 - [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2112643 - Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based\n2115371 - Unable to start windows VMs on PSI setups\n2119613 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2128554 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129235 - [RFE] Add \"Copy SSH command\" to VM action list\n2134668 - Cannot edit ssh even vm is stopped\n2139453 - 4.11.1 
rpms\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects\n2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service\n2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS\n2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays\n2140597 - CVE-2022-37603 loader-utils:Regular expression denial of service\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-2860 - Error on LokiStack Components when forwarding logs to Loki on proxy cluster\nLOG-3131 - vector: kube API server certificate validation failure due to hostname mismatch\nLOG-3222 - [release-5.5] fluentd plugin for kafka ca-bundle secret doesn\u0027t support multiple CAs\nLOG-3226 - FluentdQueueLengthIncreasing rule failing to be evaluated. \nLOG-3284 - [release-5.5][Vector] logs parsed into structured when json is set without structured types. 
\nLOG-3287 - [release-5.5] Increase value of cluster-logging PriorityClass to move closer to system-cluster-critical value\nLOG-3301 - [release-5.5][ClusterLogging] elasticsearchStatus in ClusterLogging instance CR is not updated when Elasticsearch status is changed\nLOG-3305 - [release-5.5] Kibana Authentication Exception cookie issue\nLOG-3310 - [release-5.5] Can\u0027t choose correct CA ConfigMap Key when creating lokistack in Console\nLOG-3332 - [release-5.5] Reconcile error on controller when creating LokiStack with tls config\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update\nAdvisory ID: RHSA-2022:8917-01\nProduct: Red Hat JBoss Web Server\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:8917\nIssue date: 2022-12-12\nCVE Names: CVE-2022-1292 CVE-2022-2068\n====================================================================\n1. Summary:\n\nAn update is now available for Red Hat JBoss Web Server 5.7.1 on Red Hat\nEnterprise Linux versions 7, 8, and 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat JBoss Web Server 5.7 for RHEL 7 Server - x86_64\nRed Hat JBoss Web Server 5.7 for RHEL 8 - x86_64\nRed Hat JBoss Web Server 5.7 for RHEL 9 - x86_64\n\n3. Description:\n\nRed Hat JBoss Web Server is a fully integrated and certified set of\ncomponents for hosting Java web applications. It is comprised of the Apache\nTomcat Servlet container, JBoss HTTP Connector (mod_cluster), the\nPicketLink Vault extension for Apache Tomcat, and the Tomcat Native\nlibrary. 
\n\nThis release of Red Hat JBoss Web Server 5.7.1 serves as a replacement for\nRed Hat JBoss Web Server 5.7.0. This release includes bug fixes,\nenhancements and component upgrades, which are documented in the Release\nNotes, linked to in the References. \n\nSecurity Fix(es):\n\n* openssl: c_rehash script allows command injection (CVE-2022-1292)\n\n* openssl: the c_rehash script allows command injection (CVE-2022-2068)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat JBoss Web Server 5.7 for RHEL 7 Server:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el7jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el7jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el7jws.x86_64.rpm\n\nRed Hat JBoss Web Server 5.7 for RHEL 8:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el8jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el8jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el8jws.x86_64.rpm\n\nRed Hat JBoss Web Server 5.7 for RHEL 9:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el9jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el9jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el9jws.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. 
Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY5dYDtzjgjWX9erEAQihfg/+JKRn1ponld/PXWb0JyTUZp2RsgqRlaoi\ndFWK8JVr3iIzA8pVUqiy+9fYqvRLvRNv8iyPezTFvlfi70FDLXd58QjxQd2zIcI2\ntvwFp3mFYfqT3iEz3PdvhiDpPx9XVeSuXgl8CglshJc4ARkLtdIJzkB6xoWl3fe0\nmyZzwJChpWzOYvZWZVzPRNzsuAi75pc/y8GwVh+fIlw3iySiskkspGVksXBmoBup\nXIM0O9ICMJ4jUbNTEZ0AwM6yZX1603sdvW60UarBVjf48vIM8x2ef6h84xEMB/3J\neLaUlm5Gm68CQx3Sf+ImCCmYcJ2LmX3KnBMGUhBiQGh2SlEJPKijlrHAhLX7M1YG\n/yvgd8plwRCAsYTlAJyhcXpBovNtP9io+S4kNy/j/HswvuUcJ+mrJNfZq6AwRnoF\ncNf2h1+Nl8VlT5YXkbZ0vRW1VbY7L4G1BCiqG2VGdjuOuynXh2URHsdKgs9zHY+5\nOMaV16fDbH23t04So+b4hxTsfelUUWEqyKk3qvZESNoFmWPCbaBpzDlawSGEFp5g\nLy0SN2cW39creXZ3uYioyMnHKeviSDGX8ik40c7mMYYaGnbgP1mPR8FWu9C3EoWi\n0LV3EDSHyFKFxUahjGzKKmjDQtYXPAt9Ci1Vp0OQFhKtAecfmlRZJEZRL4JCgKUd\nvabHaw7IH20=YAuF\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2068"
},
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "168351"
},
{
"db": "PACKETSTORM",
"id": "168228"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "170197"
}
],
"trust": 1.8
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-2068",
"trust": 2.6
},
{
"db": "SIEMENS",
"id": "SSA-332410",
"trust": 1.7
},
{
"db": "ICS CERT",
"id": "ICSA-22-319-01",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168022",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168351",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168378",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170197",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167713",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168204",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "167948",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168284",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168538",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168112",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168222",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168182",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "167564",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168187",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168387",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "169443",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1430",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3269",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3109",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5961",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3355",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6290",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4296",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4122",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4568",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4099",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4747",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3145",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4167",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4233",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4669",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6434",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4323",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3034",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3977",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3814",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4525",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4601",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5247",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070615",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070209",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022062906",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070434",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022071151",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070712",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2112",
"trust": 0.6
},
{
"db": "VULMON",
"id": "CVE-2022-2068",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168265",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168228",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168289",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170083",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170162",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "168351"
},
{
"db": "PACKETSTORM",
"id": "168228"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "170197"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2112"
},
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"id": "VAR-202206-1428",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.416330645
},
"last_update_date": "2024-07-23T19:47:22.503000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "OpenSSL Fixes for operating system command injection vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=197983"
},
{
"title": "Debian Security Advisories: DSA-5169-1 openssl -- security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6b57464ee127384d3d853e9cc99cf350"
},
{
"title": "Amazon Linux AMI: ALAS-2022-1626",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2022-1626"
},
{
"title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2022-2068"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1832",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2022-1832"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1831",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2022-1831"
},
{
"title": "Amazon Linux 2: ALASOPENSSL-SNAPSAFE-2023-001",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alasopenssl-snapsafe-2023-001"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-2068"
},
{
"title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228917 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228913 - security advisory"
},
{
"title": "Red Hat: Moderate: openssl security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225818 - security advisory"
},
{
"title": "Red Hat: Important: Red Hat Satellite Client security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235982 - security advisory"
},
{
"title": "Red Hat: Moderate: openssl security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226224 - security advisory"
},
{
"title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226517 - security advisory"
},
{
"title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226184 - security advisory"
},
{
"title": "Red Hat: Important: Satellite 6.11.5.6 async security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235980 - security advisory"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-123",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-123"
},
{
"title": "Red Hat: Important: Satellite 6.12.5.2 Async Security Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235979 - security advisory"
},
{
"title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226422 - security advisory"
},
{
"title": "Brocade Security Advisories: Access Denied",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=8efbc4133194fcddd0bca99df112b683"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226103 - security advisory"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-195",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-195"
},
{
"title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226188 - security advisory"
},
{
"title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226182 - security advisory"
},
{
"title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226051 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226283 - security advisory"
},
{
"title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226183 - security advisory"
},
{
"title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226507 - security advisory"
},
{
"title": "Red Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227055 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227058 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228840 - security advisory"
},
{
"title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226024 - security advisory"
},
{
"title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226714 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226290 - security advisory"
},
{
"title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226348 - security advisory"
},
{
"title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226345 - security advisory"
},
{
"title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228841 - security advisory"
},
{
"title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226346 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226430 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226370 - security advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226271 - security advisory"
},
{
"title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226696 - security advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226156 - security advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228750 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226526 - security advisory"
},
{
"title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230408 - security advisory"
},
{
"title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228889 - security advisory"
},
{
"title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228781 - security advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory"
},
{
"title": "Smart Check Scan-Report",
"trust": 0.1,
"url": "https://github.com/mawinkler/c1-cs-scan-result "
},
{
"title": "Repository with scripts to verify system against CVE",
"trust": 0.1,
"url": "https://github.com/backloop-biz/vulnerability_checker "
},
{
"title": "https://github.com/jntass/TASSL-1.1.1",
"trust": 0.1,
"url": "https://github.com/jntass/tassl-1.1.1 "
},
{
"title": "Repository with scripts to verify system against CVE",
"trust": 0.1,
"url": "https://github.com/backloop-biz/cve_checks "
},
{
"title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories",
"trust": 0.1,
"url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories "
},
{
"title": "OpenSSL-CVE-lib",
"trust": 0.1,
"url": "https://github.com/chnzzh/openssl-cve-lib "
},
{
"title": "The Register",
"trust": 0.1,
"url": "https://www.theregister.co.uk/2022/06/27/openssl_304_memory_corruption_bug/"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2112"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-78",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.8,
"url": "https://www.debian.org/security/2022/dsa-5169"
},
{
"trust": 1.7,
"url": "https://www.openssl.org/news/secadv/20220621.txt"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20220707-0008/"
},
{
"trust": 1.7,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
},
{
"trust": 1.1,
"url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2c9c35870601b4a44d86ddbf512b38df38285cfa"
},
{
"trust": 1.1,
"url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=9639817dac8bbbaa64d09efad7464ccc405527c7"
},
{
"trust": 1.1,
"url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.9,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.9,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.6,
"url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=9639817dac8bbbaa64d09efad7464ccc405527c7"
},
{
"trust": 0.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/"
},
{
"trust": 0.6,
"url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2c9c35870601b4a44d86ddbf512b38df38285cfa"
},
{
"trust": 0.6,
"url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9"
},
{
"trust": 0.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4747"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3977"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4669"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170197/red-hat-security-advisory-2022-8917-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3814"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168538/red-hat-security-advisory-2022-6696-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167948/red-hat-security-advisory-2022-5818-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168222/red-hat-security-advisory-2022-6283-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022062906"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168182/red-hat-security-advisory-2022-6184-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6290"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168204/red-hat-security-advisory-2022-6224-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4099"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4296"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4233"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6434"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3145"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070209"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168378/red-hat-security-advisory-2022-6507-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5247"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5961"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3269"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167713/ubuntu-security-notice-usn-5488-2.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3109"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-2068/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168112/red-hat-security-advisory-2022-6051-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022071151"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168187/red-hat-security-advisory-2022-6188-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168284/red-hat-security-advisory-2022-6183-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1430"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-319-01"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168351/red-hat-security-advisory-2022-6430-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4167"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167564/ubuntu-security-notice-usn-5488-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3034"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070615"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168022/red-hat-security-advisory-2022-6024-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4122"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4323"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3355"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070434"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4525"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169443/red-hat-security-advisory-2022-7058-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070712"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168387/red-hat-security-advisory-2022-6517-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4601"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-2526"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21698"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26691"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24675"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.2,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28327"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/78.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://github.com/backloop-biz/vulnerability_checker"
},
{
"trust": 0.1,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-319-01"
},
{
"trust": 0.1,
"url": "https://alas.aws.amazon.com/alas-2022-1626.html"
},
{
"trust": 0.1,
"url": "https://submariner.io/getting-started/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6346"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.1,
"url": "https://submariner.io/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/add-ons/submariner#submariner-deploy-console"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/1548993"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2789521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6024"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6430"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6290"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28327"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6507"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#critical"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-36067"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6182"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30698"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30699"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24921"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2015-20107"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38178"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0391"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28390"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24448"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27950"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3640"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0854"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-20368"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0865"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0562"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2586"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8781"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25255"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41715"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0168"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30002"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1016"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28893"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21618"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2879"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2078"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0891"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0617"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21626"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39399"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1852"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-36946"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42003"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1055"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26373"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2938"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1355"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32189"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0909"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1048"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36516"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0561"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2880"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23960"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36518"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0908"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29581"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1184"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21499"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2639"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42004"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27664"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-37603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8917"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "168351"
},
{
"db": "PACKETSTORM",
"id": "168228"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "170197"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2112"
},
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "PACKETSTORM",
"id": "168265"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "168351"
},
{
"db": "PACKETSTORM",
"id": "168228"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "170197"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2112"
},
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-06-21T00:00:00",
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"date": "2022-09-07T16:37:33",
"db": "PACKETSTORM",
"id": "168265"
},
{
"date": "2022-08-10T15:50:41",
"db": "PACKETSTORM",
"id": "168022"
},
{
"date": "2022-09-13T15:41:58",
"db": "PACKETSTORM",
"id": "168351"
},
{
"date": "2022-09-01T16:34:06",
"db": "PACKETSTORM",
"id": "168228"
},
{
"date": "2022-09-14T15:08:07",
"db": "PACKETSTORM",
"id": "168378"
},
{
"date": "2022-09-07T17:09:04",
"db": "PACKETSTORM",
"id": "168289"
},
{
"date": "2022-12-02T15:57:08",
"db": "PACKETSTORM",
"id": "170083"
},
{
"date": "2022-12-08T16:34:22",
"db": "PACKETSTORM",
"id": "170162"
},
{
"date": "2022-12-12T23:02:33",
"db": "PACKETSTORM",
"id": "170197"
},
{
"date": "2022-06-21T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202206-2112"
},
{
"date": "2022-06-21T15:15:09.060000",
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"date": "2023-03-09T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202206-2112"
},
{
"date": "2023-11-07T03:46:11.177000",
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2112"
}
],
"trust": 0.6
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "OpenSSL Operating system command injection vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2112"
}
],
"trust": 0.6
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
    "data": "operating system command injection",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2112"
}
],
"trust": 0.6
}
}
VAR-202203-1690
Vulnerability from variot - Updated: 2024-07-23 19:43zlib before 1.2.12 allows memory corruption when deflating (i.e., when compressing) if the input has many distant matches. ========================================================================== Ubuntu Security Notice USN-5359-2 June 13, 2022
rsync vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
Summary:
rsync could be made to crash or run programs if it received specially crafted network traffic.
Software Description: - rsync: fast, versatile, remote (and local) file-copying tool
Details:
USN-5359-1 fixed vulnerabilities in rsync.
Original advisory details:
Danilo Ramos discovered that rsync incorrectly handled memory when performing certain zlib deflating operations. An attacker could use this issue to cause rsync to crash, resulting in a denial of service, or possibly execute arbitrary code.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM: rsync 3.1.1-3ubuntu1.3+esm1
In general, a standard system update will make all the necessary changes. Bugs fixed (https://bugzilla.redhat.com/):

2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2031958 - CVE-2021-43797 netty: control chars in header names may lead to HTTP request smuggling
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data 2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way 2031958 - CVE-2021-43797 netty: control chars in header names may lead to HTTP request smuggling 2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter 2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks
- JIRA issues fixed (https://issues.jboss.org/):
LOG-2334 - [release-5.3] Events listing out of order in Kibana 6.8.1
LOG-2450 - http.max_header_size set to 128kb causes communication with elasticsearch to stop working
LOG-2481 - EO shouldn't grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.3]
- This update provides security fixes, bug fixes, and updates container images. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.4 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
Security fixes:
-
Vm2: vulnerable to Sandbox Bypass (CVE-2021-23555)
-
Golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
-
Follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)
-
Node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
-
Follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
-
Urijs: Authorization Bypass Through User-Controlled Key (CVE-2022-0613)
-
Nconf: Prototype pollution in memory store (CVE-2022-21803)
-
Nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)
-
Urijs: Leading white space bypasses protocol validation (CVE-2022-24723)
-
Node-forge: Signature verification leniency in checking
digestAlgorithm structure can lead to signature forgery (CVE-2022-24771)
-
Node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery (CVE-2022-24772)
-
Node-forge: Signature verification leniency in checking
DigestInfo structure (CVE-2022-24773)
-
Cross-fetch: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-1365)
-
Moment.js: Path traversal in moment.locale (CVE-2022-24785)
Bug fixes:
-
Failed ClusterDeployment validation errors do not surface through the ClusterPool UI (Bugzilla #1995380)
-
Agents wrong validation failure on failing to fetch image needed for installation (Bugzilla #2008583)
-
Fix catalogsource name (Bugzilla #2038250)
-
When the ocp console operator is disable on the managed cluster, the cluster claims failed to update (Bugzilla #2057761)
-
Multicluster-operators-hub-subscription OOMKilled (Bugzilla #2053308)
-
RHACM 2.4.1 Console becomes unstable and refuses login after one hour (Bugzilla #2061958)
-
RHACM 2.4.4 images (Bugzilla #2077548)
-
Bugs fixed (https://bugzilla.redhat.com/):
1995380 - failed ClusterDeployment validation errors do not surface through the ClusterPool UI
2008583 - Agents wrong validation failure on failing to fetch image needed for installation
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2038250 - Fix catalogsource name
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053308 - multicluster-operators-hub-subscription OOMKilled
2054114 - CVE-2021-23555 vm2: vulnerable to Sandbox Bypass
2055496 - CVE-2022-0613 urijs: Authorization Bypass Through User-Controlled Key
2057761 - When the ocp console operator is disable on the managed cluster, the cluster claims failed to update
2058295 - ACM doesn't accept secret type opaque for cluster api certificate
2061958 - RHACM 2.4.1 Console becomes unstable and refuses login after one hour
2062370 - CVE-2022-24723 urijs: Leading white space bypasses protocol validation
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking DigestInfo structure
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store
2076133 - CVE-2022-1365 cross-fetch: Exposure of Private Personal Information to an Unauthorized Actor
2077548 - RHACM 2.4.4 images
- Bugs fixed (https://bugzilla.redhat.com/):
2081686 - CVE-2022-29165 argocd: ArgoCD will blindly trust JWT claims if anonymous access is enabled
2081689 - CVE-2022-24905 argocd: Login screen allows message spoofing if SSO is enabled
2081691 - CVE-2022-24904 argocd: Symlink following allows leaking out-of-bound manifests and JSON files from Argo CD repo-server
-
8) - noarch
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.7 Release Notes linked from the References section. Description:
Red Hat Openshift GitOps is a declarative way to implement continuous deployment for cloud native applications.
Security Fix(es):
- argocd: vulnerable to a variety of attacks when an SSO login is initiated from the Argo CD CLI or the UI. Bugs fixed (https://bugzilla.redhat.com/):
2096278 - CVE-2022-31035 argocd: cross-site scripting (XSS) allow a malicious user to inject a javascript link in the UI
2096282 - CVE-2022-31034 argocd: vulnerable to a variety of attacks when an SSO login is initiated from the Argo CD CLI or the UI.
2096283 - CVE-2022-31016 argocd: vulnerable to an uncontrolled memory consumption bug
2096291 - CVE-2022-31036 argocd: vulnerable to a symlink following bug allowing a malicious user with repository write access
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: zlib security update
Advisory ID: RHSA-2022:2213-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2022:2213
Issue date: 2022-05-11
CVE Names: CVE-2018-25032
====================================================================

1. Summary:
An update for zlib is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
2. Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
3. Description:
The zlib packages provide a general-purpose lossless data compression library that is used by many different programs.
Security Fix(es):
- zlib: A flaw found in zlib when compressing (not decompressing) certain inputs (CVE-2018-25032)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
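As a hedged illustration of the advisory's scope: CVE-2018-25032 was fixed upstream in zlib 1.2.12, so one quick (non-authoritative) triage step is to compare the zlib version linked into a runtime against that threshold. The sketch below uses Python's standard `zlib.ZLIB_RUNTIME_VERSION`; it is an illustrative check only, not a substitute for applying the vendored packages listed in this advisory, since distributions often backport the fix without bumping the version string.

```python
import zlib


def zlib_predates_fix(version: str) -> bool:
    """Return True if a plain zlib version string is older than 1.2.12,
    the release that fixed CVE-2018-25032 upstream.

    Note: a False result is only meaningful for unpatched upstream builds;
    distro packages (e.g. zlib-1.2.7-20.el7_9) may carry the backported fix.
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts < (1, 2, 12)


# Inspect the zlib actually linked into this Python interpreter.
print(zlib.ZLIB_RUNTIME_VERSION, zlib_predates_fix(zlib.ZLIB_RUNTIME_VERSION))
```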
4. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
5. Bugs fixed (https://bugzilla.redhat.com/):
2067945 - CVE-2018-25032 zlib: A flaw found in zlib when compressing (not decompressing) certain inputs
6. Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: zlib-1.2.7-20.el7_9.src.rpm
x86_64:
zlib-1.2.7-20.el7_9.i686.rpm
zlib-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64:
minizip-1.2.7-20.el7_9.i686.rpm
minizip-1.2.7-20.el7_9.x86_64.rpm
minizip-devel-1.2.7-20.el7_9.i686.rpm
minizip-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-devel-1.2.7-20.el7_9.i686.rpm
zlib-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-static-1.2.7-20.el7_9.i686.rpm
zlib-static-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: zlib-1.2.7-20.el7_9.src.rpm
x86_64:
zlib-1.2.7-20.el7_9.i686.rpm
zlib-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64:
minizip-1.2.7-20.el7_9.i686.rpm
minizip-1.2.7-20.el7_9.x86_64.rpm
minizip-devel-1.2.7-20.el7_9.i686.rpm
minizip-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-devel-1.2.7-20.el7_9.i686.rpm
zlib-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-static-1.2.7-20.el7_9.i686.rpm
zlib-static-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: zlib-1.2.7-20.el7_9.src.rpm
ppc64:
zlib-1.2.7-20.el7_9.ppc.rpm
zlib-1.2.7-20.el7_9.ppc64.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc64.rpm
zlib-devel-1.2.7-20.el7_9.ppc.rpm
zlib-devel-1.2.7-20.el7_9.ppc64.rpm
ppc64le:
zlib-1.2.7-20.el7_9.ppc64le.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc64le.rpm
zlib-devel-1.2.7-20.el7_9.ppc64le.rpm
s390x:
zlib-1.2.7-20.el7_9.s390.rpm
zlib-1.2.7-20.el7_9.s390x.rpm
zlib-debuginfo-1.2.7-20.el7_9.s390.rpm
zlib-debuginfo-1.2.7-20.el7_9.s390x.rpm
zlib-devel-1.2.7-20.el7_9.s390.rpm
zlib-devel-1.2.7-20.el7_9.s390x.rpm
x86_64:
zlib-1.2.7-20.el7_9.i686.rpm
zlib-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-devel-1.2.7-20.el7_9.i686.rpm
zlib-devel-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64:
minizip-1.2.7-20.el7_9.ppc.rpm
minizip-1.2.7-20.el7_9.ppc64.rpm
minizip-devel-1.2.7-20.el7_9.ppc.rpm
minizip-devel-1.2.7-20.el7_9.ppc64.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc64.rpm
zlib-static-1.2.7-20.el7_9.ppc.rpm
zlib-static-1.2.7-20.el7_9.ppc64.rpm
ppc64le:
minizip-1.2.7-20.el7_9.ppc64le.rpm
minizip-devel-1.2.7-20.el7_9.ppc64le.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc64le.rpm
zlib-static-1.2.7-20.el7_9.ppc64le.rpm
s390x:
minizip-1.2.7-20.el7_9.s390.rpm
minizip-1.2.7-20.el7_9.s390x.rpm
minizip-devel-1.2.7-20.el7_9.s390.rpm
minizip-devel-1.2.7-20.el7_9.s390x.rpm
zlib-debuginfo-1.2.7-20.el7_9.s390.rpm
zlib-debuginfo-1.2.7-20.el7_9.s390x.rpm
zlib-static-1.2.7-20.el7_9.s390.rpm
zlib-static-1.2.7-20.el7_9.s390x.rpm
x86_64:
minizip-1.2.7-20.el7_9.i686.rpm
minizip-1.2.7-20.el7_9.x86_64.rpm
minizip-devel-1.2.7-20.el7_9.i686.rpm
minizip-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-static-1.2.7-20.el7_9.i686.rpm
zlib-static-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: zlib-1.2.7-20.el7_9.src.rpm
x86_64:
zlib-1.2.7-20.el7_9.i686.rpm
zlib-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-devel-1.2.7-20.el7_9.i686.rpm
zlib-devel-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64:
minizip-1.2.7-20.el7_9.i686.rpm
minizip-1.2.7-20.el7_9.x86_64.rpm
minizip-devel-1.2.7-20.el7_9.i686.rpm
minizip-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-static-1.2.7-20.el7_9.i686.rpm
zlib-static-1.2.7-20.el7_9.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/updates/classification/#important
8. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYnw1+9zjgjWX9erEAQhePQ//UtM5hhHSzE0ZKC4Z9/u34cRNcqIc5nmT
opYgZo/hPWp5kkh0R9/tAMWAEa7olBzfzsxulOkm2I65R6k/+fLKaXeQOcwMAkSH
gyKBU2TG3+ziT1BrsXBDWAse9mqU+zX7t9rDUZ8u9g30qr/9xrDtrVb0b4Sypslf
K5CEMHoskqCnHdl2j+vPOyOCwq8KxLMPBAYtY/X51JwLtT8thvmCQrPWANvWjoSq
nDhdVsWpBtPNnsgBqg8Jv+9YhEHJTaa3wVPVorzgP2Bo4W8gmiiukSK9Sv3zcCTu
lJnSolqBBU7NmGdQooPrUlUoqJUKXfFXgu+mjybTym8Fdoe0lnxLFSvoEeAr9Swo
XlFeBrOR8F5SO16tYKCAtyhafmJn+8MisTPN0NmUD7VLAJ0FzhEk48dlLl5+EoAy
AlxiuqgKh+O1zFRN80RSvYkPjWKU6KyK8QJaSKdroGcMjNkjhZ3cM6bpVP6V75F3
CcLZWlP5d18qgfL/SRZo8NG23h+Fzz6FWNSQQZse27NS3BZsM4PVsHF5oaRN3Vij
AFwDmIhHL7pE8pZaWck7qevt3i/hwzwYWV5VYYRgkYQIvveE0WUM/kqm+wqlU50Y
bbpALcI5h9b83JgteVQG0hf9h5avYzgGrfbj+FOEVPPN86K37ILDvT45VcSjf1vO
4nrrtbUzAhY=
=Pgu3
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202203-1690",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.7.5"
},
{
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "13.46"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "17.32"
},
{
"model": "hci compute node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "management services for element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.9.2"
},
{
"model": "oncommand workflow automation",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "11.54"
},
{
"model": "python",
"scope": "lt",
"trust": 1.0,
"vendor": "python",
"version": "3.9.13"
},
{
"model": "scalance sc632-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.5.17"
},
{
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "scalance sc646-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.8.0"
},
{
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.3.36"
},
{
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "6.45"
},
{
"model": "gotoassist",
"scope": "lt",
"trust": 1.0,
"vendor": "goto",
"version": "11.9.18"
},
{
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.4.0"
},
{
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.5.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "python",
"scope": "gte",
"trust": 1.0,
"vendor": "python",
"version": "3.7.0"
},
{
"model": "scalance sc626-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "python",
"scope": "lt",
"trust": 1.0,
"vendor": "python",
"version": "3.7.14"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "scalance sc636-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.6.6"
},
{
"model": "e-series santricity os controller",
"scope": "lte",
"trust": 1.0,
"vendor": "netapp",
"version": "11.70.2"
},
{
"model": "python",
"scope": "gte",
"trust": 1.0,
"vendor": "python",
"version": "3.8.0"
},
{
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.8.4"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.15"
},
{
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"model": "python",
"scope": "lt",
"trust": 1.0,
"vendor": "python",
"version": "3.8.14"
},
{
"model": "zlib",
"scope": "lt",
"trust": 1.0,
"vendor": "zlib",
"version": "1.2.12"
},
{
"model": "scalance sc622-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0.0"
},
{
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.3.0"
},
{
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.6.0"
},
{
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "15.38"
},
{
"model": "scalance sc642-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "python",
"scope": "lt",
"trust": 1.0,
"vendor": "python",
"version": "3.10.5"
},
{
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.6.9"
},
{
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "7.52"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.4"
},
{
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.9.0"
},
{
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "8.60"
},
{
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.7.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"model": "python",
"scope": "gte",
"trust": 1.0,
"vendor": "python",
"version": "3.10.0"
},
{
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.4.26"
},
{
"model": "e-series santricity os controller",
"scope": "gte",
"trust": 1.0,
"vendor": "netapp",
"version": "11.0.0"
},
{
"model": "python",
"scope": "gte",
"trust": 1.0,
"vendor": "python",
"version": "3.9.0"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:zlib:zlib:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "1.2.12",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.15.7",
"versionStartIncluding": "10.15",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020-005:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020-007:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:-:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-006:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-008:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-007:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "11.6.6",
"versionStartIncluding": "11.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.4",
"versionStartIncluding": "12.0.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.10.5",
"versionStartIncluding": "3.10.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.9.13",
"versionStartIncluding": "3.9.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.7.14",
"versionStartIncluding": "3.7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.8.14",
"versionStartIncluding": "3.8.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.3.36",
"versionStartIncluding": "10.3.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.4.26",
"versionStartIncluding": "10.4.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.5.17",
"versionStartIncluding": "10.5.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.6.9",
"versionStartIncluding": "10.6.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.7.5",
"versionStartIncluding": "10.7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.8.4",
"versionStartIncluding": "10.8.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.9.2",
"versionStartIncluding": "10.9.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:management_services_for_element_software:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:e-series_santricity_os_controller:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "11.70.2",
"versionStartIncluding": "11.0.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc622-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc622-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc626-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc626-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc632-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc632-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc636-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc636-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc642-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc642-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:siemens:scalance_sc646-2c_firmware:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:siemens:scalance_sc646-2c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:azul:zulu:7.52:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:azul:zulu:8.60:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:azul:zulu:11.54:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:azul:zulu:13.46:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:azul:zulu:15.38:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:azul:zulu:17.32:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:azul:zulu:6.45:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:goto:gotoassist:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "11.9.18",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "167381"
},
{
"db": "PACKETSTORM",
"id": "167140"
},
{
"db": "PACKETSTORM",
"id": "167122"
},
{
"db": "PACKETSTORM",
"id": "166946"
},
{
"db": "PACKETSTORM",
"id": "166970"
},
{
"db": "PACKETSTORM",
"id": "167225"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167568"
},
{
"db": "PACKETSTORM",
"id": "167133"
}
],
"trust": 0.9
},
"cve": "CVE-2018-25032",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "VHN-418557",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2018-25032",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-418557",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "zlib before 1.2.12 allows memory corruption when deflating (i.e., when compressing) if the input has many distant matches. ==========================================================================\nUbuntu Security Notice USN-5359-2\nJune 13, 2022\n\nrsync vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n\nSummary:\n\nrsync could be made to crash or run programs if it received\nspecially crafted network traffic. \n\nSoftware Description:\n- rsync: fast, versatile, remote (and local) file-copying tool\n\nDetails:\n\nUSN-5359-1 fixed vulnerabilities in rsync. \n\nOriginal advisory details:\n\n Danilo Ramos discovered that rsync incorrectly handled memory when\n performing certain zlib deflating operations. An attacker could use this\n issue to cause rsync to crash, resulting in a denial of service, or\n possibly execute arbitrary code. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n rsync 3.1.1-3ubuntu1.3+esm1\n\nIn general, a standard system update will make all the necessary changes. Bugs fixed (https://bugzilla.redhat.com/):\n\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2031958 - CVE-2021-43797 netty: control chars in header names may lead to HTTP request smuggling\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-2334 - [release-5.3] Events listing out of order in Kibana 6.8.1\nLOG-2450 - http.max_header_size set to 128kb causes communication with elasticsearch to stop working\nLOG-2481 - EO shouldn\u0027t grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.3]\n\n6. This update provides security fixes, bug\nfixes, and updates container images. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.4 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes:\n\n* Vm2: vulnerable to Sandbox Bypass (CVE-2021-23555)\n\n* Golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* Follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\n* Node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* Follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* Urijs: Authorization Bypass Through User-Controlled Key (CVE-2022-0613)\n\n* Nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* Nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Urijs: Leading white space bypasses protocol validation (CVE-2022-24723)\n\n* Node-forge: Signature verification leniency in checking `digestAlgorithm`\nstructure can lead to signature forgery (CVE-2022-24771)\n\n* Node-forge: Signature verification failing to check tailing garbage bytes\ncan lead to signature forgery (CVE-2022-24772)\n\n* Node-forge: Signature verification leniency in checking `DigestInfo`\nstructure (CVE-2022-24773)\n\n* Cross-fetch: Exposure of Private Personal Information to an Unauthorized\nActor (CVE-2022-1365)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\nBug fixes:\n\n* Failed ClusterDeployment validation errors do not surface through the\nClusterPool UI (Bugzilla #1995380)\n\n* Agents wrong validation failure on failing to fetch image needed for\ninstallation (Bugzilla #2008583)\n\n* Fix catalogsource name (Bugzilla #2038250)\n\n* When the ocp console operator is disable on the managed cluster, 
the\ncluster claims failed to update (Bugzilla #2057761)\n\n* Multicluster-operators-hub-subscription OOMKilled (Bugzilla #2053308)\n\n* RHACM 2.4.1 Console becomes unstable and refuses login after one hour\n(Bugzilla #2061958)\n\n* RHACM 2.4.4 images (Bugzilla #2077548)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995380 - failed ClusterDeployment validation errors do not surface through the ClusterPool UI\n2008583 - Agents wrong validation failure on failing to fetch image needed for installation\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2038250 - Fix catalogsource name\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System account\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053308 - multicluster-operators-hub-subscription OOMKilled\n2054114 - CVE-2021-23555 vm2: vulnerable to Sandbox Bypass\n2055496 - CVE-2022-0613 urijs: Authorization Bypass Through User-Controlled Key\n2057761 - When the ocp console operator is disable on the managed cluster, the cluster claims failed to update\n2058295 - ACM doesn\u0027t accept secret type opaque for cluster api certificate\n2061958 - RHACM 2.4.1 Console becomes unstable and refuses login after one hour\n2062370 - CVE-2022-24723 urijs: Leading white space bypasses protocol validation\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2067461 - CVE-2022-24773 node-forge: Signature 
verification leniency in checking `DigestInfo` structure\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2076133 - CVE-2022-1365 cross-fetch: Exposure of Private Personal Information to an Unauthorized Actor\n2077548 - RHACM 2.4.4 images\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2081686 - CVE-2022-29165 argocd: ArgoCD will blindly trust JWT claims if anonymous access is enabled\n2081689 - CVE-2022-24905 argocd: Login screen allows message spoofing if SSO is enabled\n2081691 - CVE-2022-24904 argocd: Symlink following allows leaking out-of-bound manifests and JSON files from Argo CD repo-server\n\n5. 8) - noarch\n\n3. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.7 Release Notes linked from the References section. Description:\n\nRed Hat Openshift GitOps is a declarative way to implement continuous\ndeployment for cloud native applications. \n\nSecurity Fix(es):\n\n* argocd: vulnerable to a variety of attacks when an SSO login is initiated\nfrom the Argo CD CLI or the UI. Bugs fixed (https://bugzilla.redhat.com/):\n\n2096278 - CVE-2022-31035 argocd: cross-site scripting (XSS) allow a malicious user to inject a javascript link in the UI\n2096282 - CVE-2022-31034 argocd: vulnerable to a variety of attacks when an SSO login is initiated from the Argo CD CLI or the UI. \n2096283 - CVE-2022-31016 argocd: vulnerable to an uncontrolled memory consumption bug\n2096291 - CVE-2022-31036 argocd: vulnerable to a symlink following bug allowing a malicious user with repository write access\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: zlib security update\nAdvisory ID: RHSA-2022:2213-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:2213\nIssue date: 2022-05-11\nCVE Names: CVE-2018-25032\n====================================================================\n1. Summary:\n\nAn update for zlib is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe zlib packages provide a general-purpose lossless data compression\nlibrary that is used by many different programs. \n\nSecurity Fix(es):\n\n* zlib: A flaw found in zlib when compressing (not decompressing) certain\ninputs (CVE-2018-25032)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2067945 - CVE-2018-25032 zlib: A flaw found in zlib when compressing (not decompressing) certain inputs\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nzlib-1.2.7-20.el7_9.src.rpm\n\nx86_64:\nzlib-1.2.7-20.el7_9.i686.rpm\nzlib-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nminizip-1.2.7-20.el7_9.i686.rpm\nminizip-1.2.7-20.el7_9.x86_64.rpm\nminizip-devel-1.2.7-20.el7_9.i686.rpm\nminizip-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-devel-1.2.7-20.el7_9.i686.rpm\nzlib-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-static-1.2.7-20.el7_9.i686.rpm\nzlib-static-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nzlib-1.2.7-20.el7_9.src.rpm\n\nx86_64:\nzlib-1.2.7-20.el7_9.i686.rpm\nzlib-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nminizip-1.2.7-20.el7_9.i686.rpm\nminizip-1.2.7-20.el7_9.x86_64.rpm\nminizip-devel-1.2.7-20.el7_9.i686.rpm\nminizip-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-devel-1.2.7-20.el7_9.i686.rpm\nzlib-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-static-1.2.7-20.el7_9.i686.rpm\nzlib-static-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nzlib-1.2.7-20.el7_9.src.rpm\n\nppc64:\nzlib-1.2.7-20.el7_9.ppc.rpm\nzlib-1.2.7-20.el7_9.ppc64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc64.rpm\nzlib-devel-1.2.7-20.el7_9.ppc.rpm\nzlib-devel-1.2.7-20.el7_9.ppc64.rpm\n\nppc64le:\nzlib-1.2.7-20.el7_9.ppc64le.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc64le.rpm\nzlib-devel-1.2.7-20.el7_9.ppc64le.rpm\n\ns390x:\nzlib-1.2.7-20.el7_9.s390.rpm\nzlib-1.2.7-20.el7_9.s390x.rpm\nzlib-debuginfo-1.2.7-20.el7_9.s390.rpm\nzlib-debuginfo-1.2.7-20.el7_9.s390x.rpm\nzlib-devel-1.2.7-20.el7_9.s390.rpm\nzlib-devel-1.2.7-20.el7_9.s390x.rpm\n\nx86_64:\nzlib-1.2.7-20.el7_9.i686.rpm\nzlib-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-devel-1.2.7-20.el7_9.i686.rpm\nzlib-devel-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nminizip-1.2.7-20.el7_9.ppc.rpm\nminizip-1.2.7-20.el7_9.ppc64.rpm\nminizip-devel-1.2.7-20.el7_9.ppc.rpm\nminizip-devel-1.2.7-20.el7_9.ppc64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc64.rpm\nzlib-static-1.2.7-20.el7_9.ppc.rpm\nzlib-static-1.2.7-20.el7_9.ppc64.rpm\n\nppc64le:\nminizip-1.2.7-20.el7_9.ppc64le.rpm\nminizip-devel-1.2.7-20.el7_9.ppc64le.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc64le.rpm\nzlib-static-1.2.7-20.el7_9.ppc64le.rpm\n\ns390x:\nminizip-1.2.7-20.el7_9.s390.rpm\nminizip-1.2.7-20.el7_9.s390x.rpm\nminizip-devel-1.2.7-20.el7_9.s390.rpm\nminizip-devel-1.2.7-20.el7_9.s390x.rpm\nzlib-debuginfo-1.2.7-20.el7_9.s390.rpm\nzlib-debuginfo-1.2.7-20.el7_9.s390x.rpm\nzlib-static-1.2.7-20.el7_9.s390.rpm\nzlib-static-1.2.7-20.el7_9.s390x.rpm\n\nx86_64:\nminizip-1.2.7-20.el7_9.i686.rpm\nminizip-1.2.7-20.el7_9.x86_64.rpm\nminizip-devel-1.2.7-20.el7_9.i686.rpm\nminizip-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-static-1.2.7-20.el7_9.i686.rpm\nzlib-static-1.2.7-20.el7_9.x8
6_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nzlib-1.2.7-20.el7_9.src.rpm\n\nx86_64:\nzlib-1.2.7-20.el7_9.i686.rpm\nzlib-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-devel-1.2.7-20.el7_9.i686.rpm\nzlib-devel-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nminizip-1.2.7-20.el7_9.i686.rpm\nminizip-1.2.7-20.el7_9.x86_64.rpm\nminizip-devel-1.2.7-20.el7_9.i686.rpm\nminizip-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-static-1.2.7-20.el7_9.i686.rpm\nzlib-static-1.2.7-20.el7_9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYnw1+9zjgjWX9erEAQhePQ//UtM5hhHSzE0ZKC4Z9/u34cRNcqIc5nmT\nopYgZo/hPWp5kkh0R9/tAMWAEa7olBzfzsxulOkm2I65R6k/+fLKaXeQOcwMAkSH\ngyKBU2TG3+ziT1BrsXBDWAse9mqU+zX7t9rDUZ8u9g30qr/9xrDtrVb0b4Sypslf\nK5CEMHoskqCnHdl2j+vPOyOCwq8KxLMPBAYtY/X51JwLtT8thvmCQrPWANvWjoSq\nnDhdVsWpBtPNnsgBqg8Jv+9YhEHJTaa3wVPVorzgP2Bo4W8gmiiukSK9Sv3zcCTu\nlJnSolqBBU7NmGdQooPrUlUoqJUKXfFXgu+mjybTym8Fdoe0lnxLFSvoEeAr9Swo\nXlFeBrOR8F5SO16tYKCAtyhafmJn+8MisTPN0NmUD7VLAJ0FzhEk48dlLl5+EoAy\nAlxiuqgKh+O1zFRN80RSvYkPjWKU6KyK8QJaSKdroGcMjNkjhZ3cM6bpVP6V75F3\nCcLZWlP5d18qgfL/SRZo8NG23h+Fzz6FWNSQQZse27NS3BZsM4PVsHF5oaRN3Vij\nAFwDmIhHL7pE8pZaWck7qevt3i/hwzwYWV5VYYRgkYQIvveE0WUM/kqm+wqlU50Y\nbbpALcI5h9b83JgteVQG0hf9h5avYzgGrfbj+FOEVPPN86K37ILDvT45VcSjf1vO\n4nrrtbUzAhY=Pgu3\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2018-25032"
},
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "PACKETSTORM",
"id": "167486"
},
{
"db": "PACKETSTORM",
"id": "167381"
},
{
"db": "PACKETSTORM",
"id": "167140"
},
{
"db": "PACKETSTORM",
"id": "167122"
},
{
"db": "PACKETSTORM",
"id": "166946"
},
{
"db": "PACKETSTORM",
"id": "166970"
},
{
"db": "PACKETSTORM",
"id": "167225"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167568"
},
{
"db": "PACKETSTORM",
"id": "167133"
}
],
"trust": 1.89
},
"exploit_availability": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-418557",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
}
]
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2018-25032",
"trust": 2.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/28/3",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/26/1",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/28/1",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/24/1",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/25/2",
"trust": 1.1
},
{
"db": "SIEMENS",
"id": "SSA-333517",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "167133",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167381",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167122",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167225",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167140",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "169782",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "166946",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167568",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "166970",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167486",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "166552",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168352",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168042",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166967",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167327",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167391",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167400",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167956",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167088",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167142",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167346",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171157",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169897",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168696",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167008",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167602",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167277",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167330",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167485",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167679",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167334",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167116",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167389",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166563",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166555",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167223",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170003",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167555",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168036",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167224",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167260",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167134",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167364",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167594",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167461",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171152",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167188",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167591",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168011",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167271",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167936",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167138",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167189",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167586",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167186",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167281",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169624",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167470",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167265",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168392",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167119",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167136",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167674",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167622",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167124",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-418557",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "PACKETSTORM",
"id": "167486"
},
{
"db": "PACKETSTORM",
"id": "167381"
},
{
"db": "PACKETSTORM",
"id": "167140"
},
{
"db": "PACKETSTORM",
"id": "167122"
},
{
"db": "PACKETSTORM",
"id": "166946"
},
{
"db": "PACKETSTORM",
"id": "166970"
},
{
"db": "PACKETSTORM",
"id": "167225"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167568"
},
{
"db": "PACKETSTORM",
"id": "167133"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"id": "VAR-202203-1690",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
}
],
"trust": 0.6383838399999999
},
"last_update_date": "2024-07-23T19:43:54.586000Z",
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.1,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-333517.pdf"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220729-0004/"
},
{
"trust": 1.1,
"url": "https://github.com/madler/zlib/compare/v1.2.11...v1.2.12"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220526-0009/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213255"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213256"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213257"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2022/dsa-5111"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/may/38"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/may/35"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/may/33"
},
{
"trust": 1.1,
"url": "https://security.gentoo.org/glsa/202210-42"
},
{
"trust": 1.1,
"url": "https://github.com/madler/zlib/commit/5c44459c3b28a9bd3283aaceab7c615f8020c531"
},
{
"trust": 1.1,
"url": "https://github.com/madler/zlib/issues/605"
},
{
"trust": 1.1,
"url": "https://www.openwall.com/lists/oss-security/2022/03/24/1"
},
{
"trust": 1.1,
"url": "https://www.openwall.com/lists/oss-security/2022/03/28/1"
},
{
"trust": 1.1,
"url": "https://www.openwall.com/lists/oss-security/2022/03/28/3"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/04/msg00000.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/05/msg00008.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/09/msg00023.html"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2022/03/25/2"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2022/03/26/1"
},
{
"trust": 1.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/dczfijbjtz7cl5qxbfktq22q26vinruf/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/df62mvmh3qugmbdcb3dy2erq6ebhtadb/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jzzptwryqulaol3aw7rzjnvz2uonxcv4/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ns2d2gfpfgojul4wq3duay7hf4vwq77f/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/voknp2l734ael47nrygvzikefoubqy5y/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/xokfmsnq5d5wgmalbnbxu3ge442v74wu/"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.9,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.9,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1154"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25636"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-25636"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4028"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24904"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24905"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3737"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24904"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29165"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41617"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4189"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29165"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4189"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24905"
},
{
"trust": 0.2,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43797"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0759"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21426"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21443"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21476"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-37137"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21496"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43797"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21698"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21496"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37137"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21434"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21443"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37136"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21434"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21426"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-37136"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21476"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0759"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21803"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24723"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0155"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4115"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24723"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4115"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21803"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0613"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0613"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/voknp2l734ael47nrygvzikefoubqy5y/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/jzzptwryqulaol3aw7rzjnvz2uonxcv4/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ns2d2gfpfgojul4wq3duay7hf4vwq77f/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/df62mvmh3qugmbdcb3dy2erq6ebhtadb/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/dczfijbjtz7cl5qxbfktq22q26vinruf/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/xokfmsnq5d5wgmalbnbxu3ge442v74wu/"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5359-1"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5359-2"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:4671"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:2218"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:2217"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1681"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24773"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1365"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24772"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1365"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24772"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23555"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23555"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24773"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4083"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0711"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0711"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1715"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3639"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:4690"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3639"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31036"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31034"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31035"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31034"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31016"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31035"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31016"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31036"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5152"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:2213"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "PACKETSTORM",
"id": "167486"
},
{
"db": "PACKETSTORM",
"id": "167381"
},
{
"db": "PACKETSTORM",
"id": "167140"
},
{
"db": "PACKETSTORM",
"id": "167122"
},
{
"db": "PACKETSTORM",
"id": "166946"
},
{
"db": "PACKETSTORM",
"id": "166970"
},
{
"db": "PACKETSTORM",
"id": "167225"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167568"
},
{
"db": "PACKETSTORM",
"id": "167133"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "PACKETSTORM",
"id": "167486"
},
{
"db": "PACKETSTORM",
"id": "167381"
},
{
"db": "PACKETSTORM",
"id": "167140"
},
{
"db": "PACKETSTORM",
"id": "167122"
},
{
"db": "PACKETSTORM",
"id": "166946"
},
{
"db": "PACKETSTORM",
"id": "166970"
},
{
"db": "PACKETSTORM",
"id": "167225"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167568"
},
{
"db": "PACKETSTORM",
"id": "167133"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-03-25T00:00:00",
"db": "VULHUB",
"id": "VHN-418557"
},
{
"date": "2022-06-19T16:39:51",
"db": "PACKETSTORM",
"id": "167486"
},
{
"date": "2022-06-03T15:43:30",
"db": "PACKETSTORM",
"id": "167381"
},
{
"date": "2022-05-12T15:53:27",
"db": "PACKETSTORM",
"id": "167140"
},
{
"date": "2022-05-12T15:38:35",
"db": "PACKETSTORM",
"id": "167122"
},
{
"date": "2022-05-04T05:42:06",
"db": "PACKETSTORM",
"id": "166946"
},
{
"date": "2022-05-05T17:33:41",
"db": "PACKETSTORM",
"id": "166970"
},
{
"date": "2022-05-19T15:53:12",
"db": "PACKETSTORM",
"id": "167225"
},
{
"date": "2022-11-08T13:50:54",
"db": "PACKETSTORM",
"id": "169782"
},
{
"date": "2022-06-22T15:07:32",
"db": "PACKETSTORM",
"id": "167568"
},
{
"date": "2022-05-12T15:51:01",
"db": "PACKETSTORM",
"id": "167133"
},
{
"date": "2022-03-25T09:15:08.187000",
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-02-11T00:00:00",
"db": "VULHUB",
"id": "VHN-418557"
},
{
"date": "2023-11-07T02:56:26.393000",
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Ubuntu Security Notice USN-5359-2",
"sources": [
{
"db": "PACKETSTORM",
"id": "167486"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "spoof",
"sources": [
{
"db": "PACKETSTORM",
"id": "167381"
},
{
"db": "PACKETSTORM",
"id": "167225"
}
],
"trust": 0.2
}
}
VAR-202202-0906
Vulnerability from variot - Updated: 2024-07-23 19:35

valid.c in libxml2 before 2.9.13 has a use-after-free of ID and IDREF attributes.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
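The ID and IDREF attribute types involved in this use-after-free are declared in a document's DTD; a minimal, purely illustrative document exercising them looks like this (element and attribute names are arbitrary):

```xml
<?xml version="1.0"?>
<!DOCTYPE doc [
  <!ELEMENT doc (item*)>
  <!ELEMENT item EMPTY>
  <!-- "id" is declared as type ID and "ref" as type IDREF; a
       validating parser such as libxml2 (valid.c) tracks these
       values to check that every IDREF resolves to a declared ID -->
  <!ATTLIST item id  ID    #IMPLIED
                 ref IDREF #IMPLIED>
]>
<doc>
  <item id="a1"/>
  <item ref="a1"/>
</doc>
```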
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes
Advisory ID: RHSA-2022:1081-01
Product: Red Hat ACM
Advisory URL: https://access.redhat.com/errata/RHSA-2022:1081
Issue date: 2022-03-28
CVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2021-3200 CVE-2021-3445 CVE-2021-3521 CVE-2021-3580 CVE-2021-3712 CVE-2021-3800 CVE-2021-3999 CVE-2021-20231 CVE-2021-20232 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23177 CVE-2021-28153 CVE-2021-31566 CVE-2021-33560 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-42574 CVE-2021-43565 CVE-2022-23218 CVE-2022-23219 CVE-2022-23308 CVE-2022-23806 CVE-2022-24407
====================================================================

1. Summary:
Gatekeeper Operator v0.2
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Gatekeeper Operator v0.2
Gatekeeper is an open source project that applies the OPA Constraint Framework to enforce policies on your Kubernetes clusters.
This advisory contains the container images for Gatekeeper that include security updates, and container upgrades.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Note: Gatekeeper support from the Red Hat support team is limited cases where it is integrated and used with Red Hat Advanced Cluster Management for Kubernetes. For support options for any other use, see the Gatekeeper open source project website at: https://open-policy-agent.github.io/gatekeeper/website/docs/howto/.
Security updates:
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)

- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)

Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
The requirements to apply the upgraded images are different whether or not you used the operator. Complete the following steps, depending on your installation:
- Upgrade gatekeeper operator:
The gatekeeper operator that is installed by the gatekeeper operator policy has installPlanApproval set to Automatic. This setting means the operator will be upgraded automatically when there is a new version of the operator. No further action is required for upgrade. If you changed the setting for installPlanApproval to manual, then you must view each cluster to manually approve the upgrade to the operator.

- Upgrade gatekeeper without the operator: The gatekeeper version is specified as part of the Gatekeeper CR in the gatekeeper operator policy. To upgrade the gatekeeper version: a) Determine the latest version of gatekeeper by visiting: https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. b) Click the tag dropdown, and find the latest static tag. An example tag is 'v3.3.0-1'. c) Edit the gatekeeper operator policy and update the image tag to use the latest static tag. For example, you might change this line to image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'.
Refer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/ for additional information.
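In step (c), only the image tag line inside the gatekeeper operator policy changes. A minimal sketch of the edited fragment follows; the surrounding policy keys are omitted, and only the image line itself is taken from the advisory's example:

```yaml
# Hypothetical excerpt of the gatekeeper operator policy; replace
# 'v3.3.0-1' with the latest static tag found in the Red Hat catalog.
image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'
```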
- Bugs fixed (https://bugzilla.redhat.com/):
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements
- References:
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-12762
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-16135
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2021-3200
https://access.redhat.com/security/cve/CVE-2021-3445
https://access.redhat.com/security/cve/CVE-2021-3521
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3712
https://access.redhat.com/security/cve/CVE-2021-3800
https://access.redhat.com/security/cve/CVE-2021-3999
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-22876
https://access.redhat.com/security/cve/CVE-2021-22898
https://access.redhat.com/security/cve/CVE-2021-22925
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-28153
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-33560
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-42574
https://access.redhat.com/security/cve/CVE-2021-43565
https://access.redhat.com/security/cve/CVE-2022-23218
https://access.redhat.com/security/cve/CVE-2022-23219
https://access.redhat.com/security/cve/CVE-2022-23308
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate
https://open-policy-agent.github.io/gatekeeper/website/docs/howto/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

Summary:
An update for libxml2 is now available for Red Hat Enterprise Linux 8.

Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The libxml2 library is a development toolbox providing the implementation of various XML standards.
Security Fix(es):
- libxml2: Use-after-free of ID and IDREF attributes (CVE-2022-23308)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The desktop must be restarted (log out, then log back in) for this update to take effect.

Package List:
Red Hat Enterprise Linux AppStream (v. 8)

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security updates:
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)

- nodejs-shelljs: improper privilege management (CVE-2022-0144)

- follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)

- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)

- follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
Bug fix:
- RHACM 2.3.8 images (Bugzilla #2062316)
Bugs fixed (https://bugzilla.redhat.com/):
2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management 2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2062316 - RHACM 2.3.8 images
- Alternatively, on your watch, select "My Watch > General > About".

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
APPLE-SA-2022-05-16-4 Security Update 2022-004 Catalina
Security Update 2022-004 Catalina addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213255.
apache Available for: macOS Catalina Impact: Multiple issues in apache Description: Multiple issues were addressed by updating apache to version 2.4.53. CVE-2021-44224 CVE-2021-44790 CVE-2022-22719 CVE-2022-22720 CVE-2022-22721
AppKit Available for: macOS Catalina Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2022-22665: Lockheed Martin Red Team
AppleGraphicsControl Available for: macOS Catalina Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2022-26751: Michael DePlante (@izobashi) of Trend Micro Zero Day Initiative
AppleScript Available for: macOS Catalina Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2022-26697: Qi Sun and Robert Ai of Trend Micro
AppleScript Available for: macOS Catalina Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2022-26698: Qi Sun of Trend Micro
CoreTypes Available for: macOS Catalina Impact: A malicious application may bypass Gatekeeper checks Description: This issue was addressed with improved checks to prevent unauthorized actions. CVE-2022-22663: Arsenii Kostromin (0x3c3e)
CVMS Available for: macOS Catalina Impact: A malicious application may be able to gain root privileges Description: A memory initialization issue was addressed. CVE-2022-26721: Yonghwi Jin (@jinmo123) of Theori CVE-2022-26722: Yonghwi Jin (@jinmo123) of Theori
DriverKit Available for: macOS Catalina Impact: A malicious application may be able to execute arbitrary code with system privileges Description: An out-of-bounds access issue was addressed with improved bounds checking. CVE-2022-26763: Linus Henze of Pinauten GmbH (pinauten.de)
Graphics Drivers Available for: macOS Catalina Impact: A local user may be able to read kernel memory Description: An out-of-bounds read issue existed that led to the disclosure of kernel memory. This was addressed with improved input validation. CVE-2022-22674: an anonymous researcher
Intel Graphics Driver Available for: macOS Catalina Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-26720: Liu Long of Ant Security Light-Year Lab
Intel Graphics Driver Available for: macOS Catalina Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds read issue was addressed with improved input validation. CVE-2022-26770: Liu Long of Ant Security Light-Year Lab
Intel Graphics Driver Available for: macOS Catalina Impact: An application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-26756: Jack Dates of RET2 Systems, Inc
Intel Graphics Driver Available for: macOS Catalina Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved input validation. CVE-2022-26769: Antonio Zekic (@antoniozekic)
Intel Graphics Driver Available for: macOS Catalina Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-26748: Jeonghoon Shin of Theori working with Trend Micro Zero Day Initiative
Kernel Available for: macOS Catalina Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved validation. CVE-2022-26714: Peter Nguyễn Vũ Hoàng (@peternguyen14) of STAR Labs (@starlabs_sg)
Kernel Available for: macOS Catalina Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-26757: Ned Williamson of Google Project Zero
libresolv Available for: macOS Catalina Impact: An attacker may be able to cause unexpected application termination or arbitrary code execution Description: An integer overflow was addressed with improved input validation. CVE-2022-26775: Max Shavrick (@_mxms) of the Google Security Team
LibreSSL Available for: macOS Catalina Impact: Processing a maliciously crafted certificate may lead to a denial of service Description: A denial of service issue was addressed with improved input validation. CVE-2022-0778
libxml2 Available for: macOS Catalina Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2022-23308
OpenSSL Available for: macOS Catalina Impact: Processing a maliciously crafted certificate may lead to a denial of service Description: This issue was addressed with improved checks. CVE-2022-0778
PackageKit Available for: macOS Catalina Impact: A malicious application may be able to modify protected parts of the file system Description: This issue was addressed with improved entitlements. CVE-2022-26727: Mickey Jin (@patch1t)
Printing Available for: macOS Catalina Impact: A malicious application may be able to bypass Privacy preferences Description: This issue was addressed by removing the vulnerable code. CVE-2022-26746: @gorelics
Security Available for: macOS Catalina Impact: A malicious app may be able to bypass signature validation Description: A certificate parsing issue was addressed with improved checks. CVE-2022-26766: Linus Henze of Pinauten GmbH (pinauten.de)
SMB Available for: macOS Catalina Impact: An application may be able to gain elevated privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-26715: Peter Nguyễn Vũ Hoàng of STAR Labs
SoftwareUpdate Available for: macOS Catalina Impact: A malicious application may be able to access restricted files Description: This issue was addressed with improved entitlements. CVE-2022-26728: Mickey Jin (@patch1t)
TCC Available for: macOS Catalina Impact: An app may be able to capture a user's screen Description: This issue was addressed with improved checks. CVE-2022-26726: an anonymous researcher
Tcl Available for: macOS Catalina Impact: A malicious application may be able to break out of its sandbox Description: This issue was addressed with improved environment sanitization. CVE-2022-26755: Arsenii Kostromin (0x3c3e)
WebKit Available for: macOS Catalina Impact: Processing a maliciously crafted mail message may lead to running arbitrary javascript Description: A validation issue was addressed with improved input sanitization. CVE-2022-22589: Heige of KnownSec 404 Team (knownsec.com) and Bo Qu of Palo Alto Networks (paloaltonetworks.com)
Wi-Fi Available for: macOS Catalina Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2022-26761: Wang Yu of Cyberserval
zip Available for: macOS Catalina Impact: Processing a maliciously crafted file may lead to a denial of service Description: A denial of service issue was addressed with improved state handling. CVE-2022-0530
zlib Available for: macOS Catalina Impact: An attacker may be able to cause unexpected application termination or arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2018-25032: Tavis Ormandy
zsh Available for: macOS Catalina Impact: A remote attacker may be able to cause arbitrary code execution Description: This issue was addressed by updating to zsh version 5.8.1. CVE-2021-45444
Additional recognition
PackageKit We would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for their assistance.
Security Update 2022-004 Catalina may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
Summary:
The Migration Toolkit for Containers (MTC) 1.7.1 is now available.

Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Solution:
For details on how to install and use MTC, refer to:
https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html
- Bugs fixed (https://bugzilla.redhat.com/):
2020725 - CVE-2021-41771 golang: debug/macho: invalid dynamic symbol table command can cause panic 2020736 - CVE-2021-41772 golang: archive/zip: Reader.Open panics on empty string 2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion 2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache 2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error 2040378 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [backend] 2057516 - [MTC UI] UI should not allow PVC mapping for Full migration 2060244 - [MTC] DIM registry route need to be exposed to create inter-cluster state migration plans 2060717 - [MTC] Registry pod goes in CrashLoopBackOff several times when MCG Nooba is used as the Replication Repository 2061347 - [MTC] Log reader pod is missing velero and restic pod logs. 2061653 - [MTC UI] Migration Resources section showing pods from other namespaces 2062682 - [MTC] Destination storage class non-availability warning visible in Intra-cluster source to source state-migration migplan. 2065837 - controller_config.yml.j2 merge type should be set to merge (currently using the default strategic) 2071000 - Storage Conversion: UI doesn't have the ability to skip PVC 2072036 - Migration plan for storage conversion cannot be created if there's no replication repository 2072186 - Wrong migration type description 2072684 - Storage Conversion: PersistentVolumeClaimTemplates in StatefulSets are not updated automatically after migration 2073496 - Errors in rsync pod creation are not printed in the controller logs 2079814 - [MTC UI] Intra-cluster state migration plan showing a warning on PersistentVolumes page
==========================================================================
Ubuntu Security Notice USN-5422-1
May 16, 2022
libxml2 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in libxml2.

It was discovered that libxml2 incorrectly handled ID and IDREF attributes, resulting in a use-after-free. This issue only affected Ubuntu 14.04 ESM, and Ubuntu 16.04 ESM. (CVE-2022-23308)
It was discovered that libxml2 incorrectly handled certain XML files. An attacker could possibly use this issue to cause a crash or execute arbitrary code. (CVE-2022-29824)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: libxml2 2.9.13+dfsg-1ubuntu0.1 libxml2-utils 2.9.13+dfsg-1ubuntu0.1
Ubuntu 21.10: libxml2 2.9.12+dfsg-4ubuntu0.2 libxml2-utils 2.9.12+dfsg-4ubuntu0.2
Ubuntu 20.04 LTS: libxml2 2.9.10+dfsg-5ubuntu0.20.04.3 libxml2-utils 2.9.10+dfsg-5ubuntu0.20.04.3
Ubuntu 18.04 LTS: libxml2 2.9.4+dfsg1-6.1ubuntu1.6 libxml2-utils 2.9.4+dfsg1-6.1ubuntu1.6
Ubuntu 16.04 ESM: libxml2 2.9.3+dfsg1-1ubuntu0.7+esm2 libxml2-utils 2.9.3+dfsg1-1ubuntu0.7+esm2
Ubuntu 14.04 ESM: libxml2 2.9.1+dfsg1-3ubuntu4.13+esm3 libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm3
In general, a standard system update will make all the necessary changes.

Apple is aware of a report that this issue may have been actively exploited. CVE-2022-26724: Jorge A. CVE-2022-26765: Linus Henze of Pinauten GmbH (pinauten.de)
LaunchServices Available for: Apple TV 4K, Apple TV 4K (2nd generation), and Apple TV HD Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: An access issue was addressed with additional sandbox restrictions on third-party applications.
Apple TV will periodically check for software updates
Show details on source website{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202202-0906",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "tvos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.5"
},
{
"model": "communications cloud native core network function cloud native environment",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.0"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "manageability software development kit",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.5"
},
{
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.0"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.6.0"
},
{
"model": "watchos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "8.6"
},
{
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "solidfire\\, enterprise sds \\\u0026 hci storage node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ipados",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.5"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "snapmanager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.6.6"
},
{
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "snapdrive",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core unified data repository",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"model": "zfs storage appliance kit",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.8"
},
{
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "libxml2",
"scope": "lt",
"trust": 1.0,
"vendor": "xmlsoft",
"version": "2.9.13"
},
{
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.2"
},
{
"model": "solidfire \\\u0026 hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "mysql workbench",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.29"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.4"
},
{
"model": "communications cloud native core network slice selection function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"model": "clustered data ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:xmlsoft:libxml2:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "2.9.13",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-002:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-004:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-005:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-006:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-008:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-007:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-001:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "10.15.7",
"versionStartIncluding": "10.15.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-003:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.5",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.6",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:tvos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.5",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "15.5",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "12.4",
"versionStartIncluding": "12.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "11.6.6",
"versionStartIncluding": "11.6.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:snapdrive:-:*:*:*:*:unix:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:snapmanager:-:*:*:*:*:oracle:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:smi-s_provider:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:manageability_software_development_kit:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:bootstrap_os:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:zfs_storage_appliance_kit:8.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_function_cloud_native_environment:22.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:22.1.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_unified_data_repository:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_slice_selection_function:22.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "8.0.29",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166976"
}
],
"trust": 0.4
},
"cve": "CVE-2022-23308",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "PARTIAL",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "VHN-412332",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
                "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-23308",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202202-1722",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-412332",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "valid.c in libxml2 before 2.9.13 has a use-after-free of ID and IDREF attributes. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes\nAdvisory ID: RHSA-2022:1081-01\nProduct: Red Hat ACM\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:1081\nIssue date: 2022-03-28\nCVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n CVE-2019-19603 CVE-2019-20838 CVE-2020-12762\n CVE-2020-13435 CVE-2020-14155 CVE-2020-16135\n CVE-2020-24370 CVE-2021-3200 CVE-2021-3445\n CVE-2021-3521 CVE-2021-3580 CVE-2021-3712\n CVE-2021-3800 CVE-2021-3999 CVE-2021-20231\n CVE-2021-20232 CVE-2021-22876 CVE-2021-22898\n CVE-2021-22925 CVE-2021-23177 CVE-2021-28153\n CVE-2021-31566 CVE-2021-33560 CVE-2021-36084\n CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n CVE-2021-42574 CVE-2021-43565 CVE-2022-23218\n CVE-2022-23219 CVE-2022-23308 CVE-2022-23806\n CVE-2022-24407\n====================================================================\n1. Summary:\n\nGatekeeper Operator v0.2\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nGatekeeper Operator v0.2\n\nGatekeeper is an open source project that applies the OPA Constraint\nFramework to enforce policies on your Kubernetes clusters. \n\nThis advisory contains the container images for Gatekeeper that include\nsecurity updates, and container upgrades. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\nNote: Gatekeeper support from the Red Hat support team is limited cases\nwhere it is integrated and used with Red Hat Advanced Cluster Management\nfor Kubernetes. For support options for any other use, see the Gatekeeper\nopen source project website at:\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/. \n\nSecurity updates:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nThe requirements to apply the upgraded images are different whether or not\nyou\nused the operator. Complete the following steps, depending on your\ninstallation:\n\n- - Upgrade gatekeeper operator:\nThe gatekeeper operator that is installed by the gatekeeper operator policy\nhas\n`installPlanApproval` set to `Automatic`. This setting means the operator\nwill\nbe upgraded automatically when there is a new version of the operator. No\nfurther action is required for upgrade. If you changed the setting for\n`installPlanApproval` to `manual`, then you must view each cluster to\nmanually\napprove the upgrade to the operator. \n\n- - Upgrade gatekeeper without the operator:\nThe gatekeeper version is specified as part of the Gatekeeper CR in the\ngatekeeper operator policy. To upgrade the gatekeeper version:\na) Determine the latest version of gatekeeper by visiting:\nhttps://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. \nb) Click the tag dropdown, and find the latest static tag. An example tag\nis\n\u0027v3.3.0-1\u0027. 
\nc) Edit the gatekeeper operator policy and update the image tag to use the\nlatest static tag. For example, you might change this line to image:\n\u0027registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1\u0027. \n\nRefer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/\nfor additional information. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3712\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps://access.redhat.com/security/cve/CVE-2021-3999\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-23177\n
https://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-42574\nhttps://access.redhat.com/security/cve/CVE-2021-43565\nhttps://access.redhat.com/security/cve/CVE-2022-23218\nhttps://access.redhat.com/security/cve/CVE-2022-23219\nhttps://access.redhat.com/security/cve/CVE-2022-23308\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. Summary:\n\nAn update for libxml2 is now available for Red Hat Enterprise Linux 8. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe libxml2 library is a development toolbox providing the implementation\nof various XML standards. \n\nSecurity Fix(es):\n\n* libxml2: Use-after-free of ID and IDREF attributes (CVE-2022-23308)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe desktop must be restarted (log out, then log back in) for this update\nto take effect. 
Package List:\n\nRed Hat Enterprise Linux AppStream (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity updates:\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* nodejs-shelljs: improper privilege management (CVE-2022-0144)\n\n* follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\nBug fix:\n\n* RHACM 2.3.8 images (Bugzilla #2062316)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2062316 - RHACM 2.3.8 images\n\n5. Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-05-16-4 Security Update 2022-004 Catalina\n\nSecurity Update 2022-004 Catalina addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213255. 
\n\napache\nAvailable for: macOS Catalina\nImpact: Multiple issues in apache\nDescription: Multiple issues were addressed by updating apache to\nversion 2.4.53. \nCVE-2021-44224\nCVE-2021-44790\nCVE-2022-22719\nCVE-2022-22720\nCVE-2022-22721\n\nAppKit\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2022-22665: Lockheed Martin Red Team\n\nAppleGraphicsControl\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nCVE-2022-26751: Michael DePlante (@izobashi) of Trend Micro Zero Day\nInitiative\n\nAppleScript\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2022-26697: Qi Sun and Robert Ai of Trend Micro\n\nAppleScript\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2022-26698: Qi Sun of Trend Micro\n\nCoreTypes\nAvailable for: macOS Catalina\nImpact: A malicious application may bypass Gatekeeper checks\nDescription: This issue was addressed with improved checks to prevent\nunauthorized actions. \nCVE-2022-22663: Arsenii Kostromin (0x3c3e)\n\nCVMS\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to gain root privileges\nDescription: A memory initialization issue was addressed. 
\nCVE-2022-26721: Yonghwi Jin (@jinmo123) of Theori\nCVE-2022-26722: Yonghwi Jin (@jinmo123) of Theori\n\nDriverKit\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to execute arbitrary code\nwith system privileges\nDescription: An out-of-bounds access issue was addressed with\nimproved bounds checking. \nCVE-2022-26763: Linus Henze of Pinauten GmbH (pinauten.de)\n\nGraphics Drivers\nAvailable for: macOS Catalina\nImpact: A local user may be able to read kernel memory\nDescription: An out-of-bounds read issue existed that led to the\ndisclosure of kernel memory. This was addressed with improved input\nvalidation. \nCVE-2022-22674: an anonymous researcher\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-26720: Liu Long of Ant Security Light-Year Lab\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds read issue was addressed with improved\ninput validation. \nCVE-2022-26770: Liu Long of Ant Security Light-Year Lab\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-26756: Jack Dates of RET2 Systems, Inc\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A memory corruption issue was addressed with improved\ninput validation. 
\nCVE-2022-26769: Antonio Zekic (@antoniozekic)\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-26748: Jeonghoon Shin of Theori working with Trend Micro\nZero Day Initiative\n\nKernel\nAvailable for: macOS Catalina\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-26714: Peter Nguy\u1ec5n V\u0169 Ho\u00e0ng (@peternguyen14) of STAR Labs\n(@starlabs_sg)\n\nKernel\nAvailable for: macOS Catalina\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-26757: Ned Williamson of Google Project Zero\n\nlibresolv\nAvailable for: macOS Catalina\nImpact: An attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: An integer overflow was addressed with improved input\nvalidation. \nCVE-2022-26775: Max Shavrick (@_mxms) of the Google Security Team\n\nLibreSSL\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted certificate may lead to a\ndenial of service\nDescription: A denial of service issue was addressed with improved\ninput validation. \nCVE-2022-0778\n\nlibxml2\nAvailable for: macOS Catalina\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-23308\n\nOpenSSL\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted certificate may lead to a\ndenial of service\nDescription: This issue was addressed with improved checks. 
\nCVE-2022-0778\n\nPackageKit\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to modify protected parts\nof the file system\nDescription: This issue was addressed with improved entitlements. \nCVE-2022-26727: Mickey Jin (@patch1t)\n\nPrinting\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to bypass Privacy\npreferences\nDescription: This issue was addressed by removing the vulnerable\ncode. \nCVE-2022-26746: @gorelics\n\nSecurity\nAvailable for: macOS Catalina\nImpact: A malicious app may be able to bypass signature validation\nDescription: A certificate parsing issue was addressed with improved\nchecks. \nCVE-2022-26766: Linus Henze of Pinauten GmbH (pinauten.de)\n\nSMB\nAvailable for: macOS Catalina\nImpact: An application may be able to gain elevated privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-26715: Peter Nguy\u1ec5n V\u0169 Ho\u00e0ng of STAR Labs\n\nSoftwareUpdate\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to access restricted\nfiles\nDescription: This issue was addressed with improved entitlements. \nCVE-2022-26728: Mickey Jin (@patch1t)\n\nTCC\nAvailable for: macOS Catalina\nImpact: An app may be able to capture a user\u0027s screen\nDescription: This issue was addressed with improved checks. \nCVE-2022-26726: an anonymous researcher\n\nTcl\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to break out of its\nsandbox\nDescription: This issue was addressed with improved environment\nsanitization. \nCVE-2022-26755: Arsenii Kostromin (0x3c3e)\n\nWebKit\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted mail message may lead to\nrunning arbitrary javascript\nDescription: A validation issue was addressed with improved input\nsanitization. 
\nCVE-2022-22589: Heige of KnownSec 404 Team (knownsec.com) and Bo Qu\nof Palo Alto Networks (paloaltonetworks.com)\n\nWi-Fi\nAvailable for: macOS Catalina\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2022-26761: Wang Yu of Cyberserval\n\nzip\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted file may lead to a denial of\nservice\nDescription: A denial of service issue was addressed with improved\nstate handling. \nCVE-2022-0530\n\nzlib\nAvailable for: macOS Catalina\nImpact: An attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nCVE-2018-25032: Tavis Ormandy\n\nzsh\nAvailable for: macOS Catalina\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: This issue was addressed by updating to zsh version\n5.8.1. \nCVE-2021-45444\n\nAdditional recognition\n\nPackageKit\nWe would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for\ntheir assistance. \n\nSecurity Update 2022-004 Catalina may be obtained from the Mac App\nStore or Apple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.1 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2020725 - CVE-2021-41771 golang: debug/macho: invalid dynamic symbol table command can cause panic\n2020736 - CVE-2021-41772 golang: archive/zip: Reader.Open panics on empty string\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2040378 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [backend]\n2057516 - [MTC UI] UI should not allow PVC mapping for Full migration\n2060244 - [MTC] DIM registry route need to be exposed to create inter-cluster state migration plans\n2060717 - [MTC] Registry pod goes in CrashLoopBackOff several times when MCG Nooba is used as the Replication Repository\n2061347 - [MTC] Log reader pod is missing velero and restic pod logs. \n2061653 - [MTC UI] Migration Resources section showing pods from other namespaces\n2062682 - [MTC] Destination storage class non-availability warning visible in Intra-cluster source to source state-migration migplan. \n2065837 - controller_config.yml.j2 merge type should be set to merge (currently using the default strategic)\n2071000 - Storage Conversion: UI doesn\u0027t have the ability to skip PVC\n2072036 - Migration plan for storage conversion cannot be created if there\u0027s no replication repository\n2072186 - Wrong migration type description\n2072684 - Storage Conversion: PersistentVolumeClaimTemplates in StatefulSets are not updated automatically after migration\n2073496 - Errors in rsync pod creation are not printed in the controller logs\n2079814 - [MTC UI] Intra-cluster state migration plan showing a warning on PersistentVolumes page\n\n5. 
==========================================================================\nUbuntu Security Notice USN-5422-1\nMay 16, 2022\n\nlibxml2 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in libxml2. This issue only\naffected Ubuntu 14.04 ESM, and Ubuntu 16.04 ESM. (CVE-2022-23308)\n\nIt was discovered that libxml2 incorrectly handled certain XML files. \nAn attacker could possibly use this issue to cause a crash or execute\narbitrary code. (CVE-2022-29824)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n libxml2 2.9.13+dfsg-1ubuntu0.1\n libxml2-utils 2.9.13+dfsg-1ubuntu0.1\n\nUbuntu 21.10:\n libxml2 2.9.12+dfsg-4ubuntu0.2\n libxml2-utils 2.9.12+dfsg-4ubuntu0.2\n\nUbuntu 20.04 LTS:\n libxml2 2.9.10+dfsg-5ubuntu0.20.04.3\n libxml2-utils 2.9.10+dfsg-5ubuntu0.20.04.3\n\nUbuntu 18.04 LTS:\n libxml2 2.9.4+dfsg1-6.1ubuntu1.6\n libxml2-utils 2.9.4+dfsg1-6.1ubuntu1.6\n\nUbuntu 16.04 ESM:\n libxml2 2.9.3+dfsg1-1ubuntu0.7+esm2\n libxml2-utils 2.9.3+dfsg1-1ubuntu0.7+esm2\n\nUbuntu 14.04 ESM:\n libxml2 2.9.1+dfsg1-3ubuntu4.13+esm3\n libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm3\n\nIn general, a standard system update will make all the necessary changes. Apple is aware of a report that this issue may\nhave been actively exploited. \nCVE-2022-26724: Jorge A. \nCVE-2022-26765: Linus Henze of Pinauten GmbH (pinauten.de)\n\nLaunchServices\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation), and Apple\nTV HD\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: An access issue was addressed with additional sandbox\nrestrictions on third-party applications. 
\n\nApple TV will periodically check for software updates",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-23308"
},
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "167193"
},
{
"db": "PACKETSTORM",
"id": "167189"
},
{
"db": "PACKETSTORM",
"id": "166976"
},
{
"db": "PACKETSTORM",
"id": "167184"
},
{
"db": "PACKETSTORM",
"id": "167194"
}
],
"trust": 1.71
},
"exploit_availability": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-412332",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
}
]
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-23308",
"trust": 2.5
},
{
"db": "PACKETSTORM",
"id": "167194",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166327",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "167008",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166437",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168719",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166304",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.2569",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1263",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1677",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0927",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1051",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2411",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4099",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1073",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5782",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3672",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166803",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051708",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031503",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051713",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022042138",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072710",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072053",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032843",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072640",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022041523",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051839",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051326",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022030110",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031620",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031525",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032445",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022053128",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "167189",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167184",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167193",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "166431",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166433",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167188",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167185",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167186",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-412332",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166489",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166516",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166976",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "167193"
},
{
"db": "PACKETSTORM",
"id": "167189"
},
{
"db": "PACKETSTORM",
"id": "166976"
},
{
"db": "PACKETSTORM",
"id": "167184"
},
{
"db": "PACKETSTORM",
"id": "167194"
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"id": "VAR-202202-0906",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T19:35:48.751000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "libxml2 Remediation of resource management error vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=184325"
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-416",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.7,
"url": "https://github.com/gnome/libxml2/commit/652dd12a858989b14eed4e84e453059cd3ba340e"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20220331-0008/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213253"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213254"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213255"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213256"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213257"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213258"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/may/34"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/may/38"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/may/35"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/may/33"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/may/36"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/may/37"
},
{
"trust": 1.7,
"url": "https://security.gentoo.org/glsa/202210-03"
},
{
"trust": 1.7,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/blob/v2.9.13/news"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2022/04/msg00004.html"
},
{
"trust": 1.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23308"
},
{
"trust": 1.0,
"url": "https://access.redhat.com/security/cve/cve-2022-23308"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/la3mwwayzadwj5f6joubx65uzamqb7rf/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/la3mwwayzadwj5f6joubx65uzamqb7rf/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051713"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2569"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072710"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051839"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1051"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1073"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072053"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4099"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5782"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166803/red-hat-security-advisory-2022-1390-01.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/libxml2-five-vulnerabilities-37614"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032843"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166304/ubuntu-security-notice-usn-5324-1.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022053128"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167194/apple-security-advisory-2022-05-16-6.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2411"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032445"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051326"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-23308/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1263"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072640"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051708"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022042138"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022041523"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168719/gentoo-linux-security-advisory-202210-03.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022030110"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0927"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213254"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3672"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031503"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031525"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167008/red-hat-security-advisory-2022-1747-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166327/red-hat-security-advisory-2022-0899-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166437/red-hat-security-advisory-2022-1039-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031620"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1677"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-23219"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3999"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-23218"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26714"
},
{
"trust": 0.3,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.3,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0261"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25315"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22825"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22824"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22823"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22822"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22823"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22827"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22824"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-46143"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0392"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22825"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25235"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-45960"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22826"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0318"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0413"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0359"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25236"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26726"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26766"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26702"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26764"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26745"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26765"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26757"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22675"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26706"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26763"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26711"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26768"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1081"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0899"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0155"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1083"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0920"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0435"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0144"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0516"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22942"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4154"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0492"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26771"
},
{
"trust": 0.1,
"url": "https://support.apple.com/kb/ht204641"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213253."
},
{
"trust": 0.1,
"url": "https://support.apple.com/downloads/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213255."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22589"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22663"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44790"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22674"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0530"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44224"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26698"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22719"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26727"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26728"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26697"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26748"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26721"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45444"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26720"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22720"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22665"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26715"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26722"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26746"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1154"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22826"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25636"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22827"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4028"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/migration_toolkit_for_containers/mtc-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1734"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41772"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.4+dfsg1-6.1ubuntu1.6"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5422-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.10+dfsg-5ubuntu0.20.04.3"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.12+dfsg-4ubuntu0.2"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.13+dfsg-1ubuntu0.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26701"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26738"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26740"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213254."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26736"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26737"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26724"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26739"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "167193"
},
{
"db": "PACKETSTORM",
"id": "167189"
},
{
"db": "PACKETSTORM",
"id": "166976"
},
{
"db": "PACKETSTORM",
"id": "167184"
},
{
"db": "PACKETSTORM",
"id": "167194"
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "167193"
},
{
"db": "PACKETSTORM",
"id": "167189"
},
{
"db": "PACKETSTORM",
"id": "166976"
},
{
"db": "PACKETSTORM",
"id": "167184"
},
{
"db": "PACKETSTORM",
"id": "167194"
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-02-26T00:00:00",
"db": "VULHUB",
"id": "VHN-412332"
},
{
"date": "2022-03-28T15:52:16",
"db": "PACKETSTORM",
"id": "166489"
},
{
"date": "2022-03-16T16:44:24",
"db": "PACKETSTORM",
"id": "166327"
},
{
"date": "2022-03-29T15:53:19",
"db": "PACKETSTORM",
"id": "166516"
},
{
"date": "2022-05-17T17:06:32",
"db": "PACKETSTORM",
"id": "167193"
},
{
"date": "2022-05-17T16:59:55",
"db": "PACKETSTORM",
"id": "167189"
},
{
"date": "2022-05-05T17:35:22",
"db": "PACKETSTORM",
"id": "166976"
},
{
"date": "2022-05-17T16:57:29",
"db": "PACKETSTORM",
"id": "167184"
},
{
"date": "2022-05-17T17:06:48",
"db": "PACKETSTORM",
"id": "167194"
},
{
"date": "2022-02-21T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"date": "2022-02-26T05:15:08.280000",
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-11-02T00:00:00",
"db": "VULHUB",
"id": "VHN-412332"
},
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"date": "2023-11-07T03:44:08.253000",
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
}
],
"trust": 0.6
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "libxml2 Resource Management Error Vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
}
],
"trust": 0.6
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "resource management error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
}
],
"trust": 0.6
}
}
VAR-202206-1961
Vulnerability from variot - Updated: 2024-07-23 19:33When curl < 7.84.0 does FTP transfers secured by krb5, it handles message verification failures wrongly. This flaw makes it possible for a Man-In-The-Middle attack to go unnoticed and even allows it to inject data to the client. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: curl security update Advisory ID: RHSA-2022:6159-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2022:6159 Issue date: 2022-08-24 CVE Names: CVE-2022-32206 CVE-2022-32208 ==================================================================== 1. Summary:
An update for curl is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Security Fix(es):
-
curl: HTTP compression denial of service (CVE-2022-32206)
-
curl: FTP-KRB bad message verification (CVE-2022-32208)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification
- Package List:
Red Hat Enterprise Linux BaseOS (v. 8):
Source: curl-7.61.1-22.el8_6.4.src.rpm
aarch64: curl-7.61.1-22.el8_6.4.aarch64.rpm curl-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm curl-debugsource-7.61.1-22.el8_6.4.aarch64.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm libcurl-7.61.1-22.el8_6.4.aarch64.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm libcurl-devel-7.61.1-22.el8_6.4.aarch64.rpm libcurl-minimal-7.61.1-22.el8_6.4.aarch64.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm
ppc64le: curl-7.61.1-22.el8_6.4.ppc64le.rpm curl-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm curl-debugsource-7.61.1-22.el8_6.4.ppc64le.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-devel-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-minimal-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm
s390x: curl-7.61.1-22.el8_6.4.s390x.rpm curl-debuginfo-7.61.1-22.el8_6.4.s390x.rpm curl-debugsource-7.61.1-22.el8_6.4.s390x.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.s390x.rpm libcurl-7.61.1-22.el8_6.4.s390x.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.s390x.rpm libcurl-devel-7.61.1-22.el8_6.4.s390x.rpm libcurl-minimal-7.61.1-22.el8_6.4.s390x.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.s390x.rpm
x86_64: curl-7.61.1-22.el8_6.4.x86_64.rpm curl-debuginfo-7.61.1-22.el8_6.4.i686.rpm curl-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm curl-debugsource-7.61.1-22.el8_6.4.i686.rpm curl-debugsource-7.61.1-22.el8_6.4.x86_64.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.i686.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm libcurl-7.61.1-22.el8_6.4.i686.rpm libcurl-7.61.1-22.el8_6.4.x86_64.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.i686.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm libcurl-devel-7.61.1-22.el8_6.4.i686.rpm libcurl-devel-7.61.1-22.el8_6.4.x86_64.rpm libcurl-minimal-7.61.1-22.el8_6.4.i686.rpm libcurl-minimal-7.61.1-22.el8_6.4.x86_64.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.i686.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-32206 https://access.redhat.com/security/cve/CVE-2022-32208 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYwa9b9zjgjWX9erEAQi1rQ/+Kw4R4cPAIlGUx4vJwSMw8zwCDxnLviV+ YgCpaCuUwCkWI9hrAQNC1O5i2MSl7j8jI9dt0Oe770VwNIZPzJMK8MX96zYdeOsg EiuwTW5KTWKwCeAvPt6ydVji9R0N7FMDBxmdi1aE8gBt8J6pIwp4ozrR4jXiXCjB dQJlc2kf7YXDiengte1jpXNCFh2ar9t8lqmW53Hu05zR8VFdAPk6NM1kTIploICN blR9t80TbWouBvN2A6gIZ0ZWnbJOY9odCBHdo5ay8kufmQC0K9QKb7jyoaUUHVau 5/HVbncd7bFQuyu+yGoOxU1TCxwee3B9LAmR4uzDdJcaTxPgvK2cyskdTVz+9N9k nJLDYGaL7UNC7YkbByN58VC6fdGsnn8QIXHg7ICTgdhYiPZ3uP5JUiDrAGKKb/v+ XPtwYHuh6yX0OfS0JqFEMjR0P1rFLiuDNBOPBDiTV2mBVd+7kiNTs1izUDGwQeFd VaNNNU4kpD3FGOgRwxIAKz2qCX+Ody8goBeJJPGcVlmDp025ZrMisl1QC8/3eTas ML+TSvTeaSY/I35uPzKsoh1f+/lAwUsB54I6NxHH3vWYryievuSdpjtNsQInACjw owX+pU5CfOwdD56Hqdhb7fjuJVufo6VC8b0zy/vSZYnNt0cfojXA73F3B1K5+XcF bBkTeh+fqsg=powM -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Summary:
OpenShift sandboxed containers 1.3.1 is now available. Description:
OpenShift sandboxed containers support for OpenShift Container Platform provides users with built-in support for running Kata containers as an additional, optional runtime.
Space precludes documenting all of the updates to OpenShift sandboxed containers in this advisory. Bugs fixed (https://bugzilla.redhat.com/):
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2118556 - CVE-2022-2832 blender: Null pointer reference in blender thumbnail extractor
- JIRA issues fixed (https://issues.jboss.org/):
KATA-1751 - CVE-2022-24675 osc-operator-container: golang: encoding/pem: fix stack overflow in Decode [rhosc-1]
KATA-1752 - CVE-2022-28327 osc-operator-container: golang: crypto/elliptic: panic caused by oversized scalar [rhosc-1]
KATA-1754 - OSC Pod security issue in 4.12 prevents subscribing to operator
KATA-1758 - CVE-2022-30632 osc-operator-container: golang: path/filepath: stack exhaustion in Glob [rhosc-1]
- ========================================================================== Ubuntu Security Notice USN-5495-1 June 27, 2022
curl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in curl. An attacker could possibly use this issue to cause a denial of service. This issue only affected Ubuntu 21.10, and Ubuntu 22.04 LTS. (CVE-2022-32205)
Harry Sintonen discovered that curl incorrectly handled certain HTTP compressions. An attacker could possibly use this issue to cause a denial of service. (CVE-2022-32206)
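The HTTP-compression flaw above (CVE-2022-32206) is a decompression-bomb problem: a tiny compressed response can expand to an enormous body and exhaust memory. As a purely illustrative sketch — not curl's actual fix, and with the 1 MiB cap chosen arbitrarily for the example — the general mitigation is to bound how far a compressed payload may expand before giving up:

```python
import zlib

# Hypothetical guard: cap how many bytes a compressed body may expand to,
# so a malicious "compression bomb" cannot exhaust memory the way
# CVE-2022-32206 describes for curl's HTTP decompression.
MAX_DECOMPRESSED = 1024 * 1024  # 1 MiB cap, an arbitrary example limit


def bounded_decompress(data, limit=MAX_DECOMPRESSED):
    """Decompress at most `limit` bytes; reject anything that would exceed it."""
    d = zlib.decompressobj()
    out = d.decompress(data, limit)  # max_length stops output at `limit`
    if d.unconsumed_tail:            # input left over => output exceeded the cap
        raise ValueError("decompressed size exceeds limit")
    return out


# A benign 2 KiB body passes through unchanged.
print(len(bounded_decompress(zlib.compress(b"x" * 2048))))  # 2048
```

The key design choice is using `zlib.decompressobj` with `max_length` rather than one-shot `zlib.decompress`, so the expansion is bounded before any oversized buffer is ever allocated.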
Harry Sintonen discovered that curl incorrectly handled certain file permissions. An attacker could possibly use this issue to expose sensitive information. This issue only affected Ubuntu 21.10, and Ubuntu 22.04 LTS. (CVE-2022-32207)
Harry Sintonen discovered that curl incorrectly handled certain FTP-KRB messages. An attacker could possibly use this to perform a machine-in-the-middle attack. (CVE-2022-32208)
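Per the record above, CVE-2022-32208 was fixed in upstream curl 7.84.0 (distribution packages such as the Ubuntu versions listed below backport the fix, so a plain upstream-version check is only a rough triage heuristic, not a definitive test). A minimal sketch of that heuristic, with hypothetical helper names:

```python
# Hypothetical triage helper: flag upstream curl versions older than the
# 7.84.0 release that fixed CVE-2022-32208, per the advisory text above.
# NOTE: distro packages (e.g. Ubuntu's 7.81.0-1ubuntu1.3) backport the fix,
# so a version below 7.84.0 is not proof of vulnerability on its own.


def parse_version(v):
    """Turn '7.61.1' into a comparable tuple (7, 61, 1)."""
    return tuple(int(part) for part in v.split("."))


def predates_cve_2022_32208_fix(curl_version):
    """True if the upstream curl version is older than 7.84.0."""
    return parse_version(curl_version) < parse_version("7.84.0")


print(predates_cve_2022_32208_fix("7.61.1"))  # True: RHEL 8 base version
print(predates_cve_2022_32208_fix("7.84.0"))  # False: first fixed release
```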
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: curl 7.81.0-1ubuntu1.3 libcurl3-gnutls 7.81.0-1ubuntu1.3 libcurl3-nss 7.81.0-1ubuntu1.3 libcurl4 7.81.0-1ubuntu1.3
Ubuntu 21.10: curl 7.74.0-1.3ubuntu2.3 libcurl3-gnutls 7.74.0-1.3ubuntu2.3 libcurl3-nss 7.74.0-1.3ubuntu2.3 libcurl4 7.74.0-1.3ubuntu2.3
Ubuntu 20.04 LTS: curl 7.68.0-1ubuntu2.12 libcurl3-gnutls 7.68.0-1ubuntu2.12 libcurl3-nss 7.68.0-1ubuntu2.12 libcurl4 7.68.0-1ubuntu2.12
Ubuntu 18.04 LTS: curl 7.58.0-2ubuntu3.19 libcurl3-gnutls 7.58.0-2ubuntu3.19 libcurl3-nss 7.58.0-2ubuntu3.19 libcurl4 7.58.0-2ubuntu3.19
In general, a standard system update will make all the necessary changes. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.5.2 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- vm2: Sandbox Escape in vm2 (CVE-2022-36067)
Bug fixes:
-
Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters (BZ# 2074547)
-
OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constrain (BZ# 2082254)
-
subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec (BZ# 2083659)
-
Yaml editor for creating vSphere cluster moves to next line after typing (BZ# 2086883)
-
Submariner addon status doesn't track all deployment failures (BZ# 2090311)
-
Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret (BZ# 2091170)
-
After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors (BZ# 2095481)
-
Enforce failed and report the violation after modified memory value in limitrange policy (BZ# 2100036)
-
Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)" (BZ# 2101577)
-
Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies (BZ# 2102273)
-
managed cluster is in "unknown" state for 120 mins after OADP restore
-
RHACM 2.5.2 images (BZ# 2104553)
-
Subscription UI does not allow binding to label with empty value (BZ# 2104961)
-
Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ# 2106069)
-
Region information is not available for Azure cloud in managedcluster CR (BZ# 2107134)
-
cluster uninstall log points to incorrect container name (BZ# 2107359)
-
ACM shows wrong path for Argo CD applicationset git generator (BZ# 2107885)
-
Single node checkbox not visible for 4.11 images (BZ# 2109134)
-
Unable to deploy hypershift cluster when enabling validate-cluster-security (BZ# 2109544)
-
Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application (BZ# 2110026)
-
After the creation by a policy of job or deployment (in case the object is missing)ACM is trying to add new containers instead of updating (BZ# 2117728)
-
pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)
-
ArgoCD and AppSet Applications do not deploy to local-cluster (BZ# 2124707)
-
Solution:
For Red Hat Advanced Cluster Management for Kubernetes, see the following documentation, which will be updated shortly for this release, for important instructions about installing this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing
- Bugs fixed (https://bugzilla.redhat.com/):
2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters
2082254 - OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint
2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec
2086883 - Yaml editor for creating vSphere cluster moves to next line after typing
2090311 - Submariner addon status doesn't track all deployment failures
2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret
2095481 - After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors
2100036 - Enforce failed and report the violation after modified memory value in limitrange policy
2101577 - Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)"
2102273 - Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies
2103653 - managed cluster is in "unknown" state for 120 mins after OADP restore
2104553 - RHACM 2.5.2 images
2104961 - Subscription UI does not allow binding to label with empty value
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD
2107134 - Region information is not available for Azure cloud in managedcluster CR
2107359 - cluster uninstall log points to incorrect container name
2107885 - ACM shows wrong path for Argo CD applicationset git generator
2109134 - Single node checkbox not visible for 4.11 images
2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application
2117728 - After the creation by a policy of job or deployment (in case the object is missing)ACM is trying to add new containers instead of updating
2122292 - pods in CrashLoopBackoff on 3.11 managed cluster
2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster
2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
- Bugs fixed (https://bugzilla.redhat.com/):
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- Description:
OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform. This advisory contains the following OpenShift Virtualization 4.12.0 images:
Security Fix(es):
-
golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
-
kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)
-
golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
-
golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
-
golang: net/http: improper sanitization of Transfer-Encoding header (CVE-2022-1705)
-
golang: go/parser: stack exhaustion in all Parse* functions (CVE-2022-1962)
-
golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
-
golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
-
golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
-
golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)
-
golang: syscall: faccessat checks wrong group (CVE-2022-29526)
-
golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)
-
golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
-
golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)
-
golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)
-
golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)
-
golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working (CVE-2022-32148)
-
golang: crypto/tls: session tickets lack random ticket_age_add (CVE-2022-30629)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
RHEL-8-CNV-4.12
============= bridge-marker-container-v4.12.0-24 cluster-network-addons-operator-container-v4.12.0-24 cnv-containernetworking-plugins-container-v4.12.0-24 cnv-must-gather-container-v4.12.0-58 hco-bundle-registry-container-v4.12.0-769 hostpath-csi-driver-container-v4.12.0-30 hostpath-provisioner-container-v4.12.0-30 hostpath-provisioner-operator-container-v4.12.0-31 hyperconverged-cluster-operator-container-v4.12.0-96 hyperconverged-cluster-webhook-container-v4.12.0-96 kubemacpool-container-v4.12.0-24 kubevirt-console-plugin-container-v4.12.0-182 kubevirt-ssp-operator-container-v4.12.0-64 kubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55 kubevirt-tekton-tasks-copy-template-container-v4.12.0-55 kubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55 kubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55 kubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55 kubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55 kubevirt-tekton-tasks-operator-container-v4.12.0-40 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55 kubevirt-template-validator-container-v4.12.0-32 libguestfs-tools-container-v4.12.0-255 ovs-cni-marker-container-v4.12.0-24 ovs-cni-plugin-container-v4.12.0-24 virt-api-container-v4.12.0-255 virt-artifacts-server-container-v4.12.0-255 virt-cdi-apiserver-container-v4.12.0-72 virt-cdi-cloner-container-v4.12.0-72 virt-cdi-controller-container-v4.12.0-72 virt-cdi-importer-container-v4.12.0-72 virt-cdi-operator-container-v4.12.0-72 virt-cdi-uploadproxy-container-v4.12.0-71 virt-cdi-uploadserver-container-v4.12.0-72 virt-controller-container-v4.12.0-255 virt-exportproxy-container-v4.12.0-255 virt-exportserver-container-v4.12.0-255 virt-handler-container-v4.12.0-255 virt-launcher-container-v4.12.0-255 virt-operator-container-v4.12.0-255 virtio-win-container-v4.12.0-10 vm-network-latency-checkup-container-v4.12.0-89
- Bugs fixed (https://bugzilla.redhat.com/):
1719190 - Unable to cancel live-migration if virt-launcher pod in pending state
2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2040377 - Unable to delete failed VMIM after VM deleted
2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed
2052556 - Metric "kubevirt_num_virt_handlers_by_node_running_virt_launcher" reporting incorrect value
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2060499 - [RFE] Cannot add additional service (or other objects) to VM template
2069098 - Large scale |VMs migration is slow due to low migration parallelism
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2071491 - Storage Throughput metrics are incorrect in Overview
2072797 - Metrics in Virtualization -> Overview period is not clear or configurable
2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers
2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode
2086551 - Min CPU feature found in labels
2087724 - Default template show no boot source even there are auto-upload boot sources
2088129 - [SSP] webhook does not comply with restricted security context
2088464 - [CDI] cdi-deployment does not comply with restricted security context
2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR
2089744 - HCO should label its control plane namespace to admit pods at privileged security level
2089751 - 4.12.0 containers
2089804 - 4.12.0 rpms
2091856 - ?Edit BootSource? action should have more explicit information when disabled
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer
2093771 - The disk source should be PVC if the template has no auto-update boot source
2093996 - kubectl get vmi API should always return primary interface if exist
2094202 - Cloud-init username field should have hint
2096285 - KubeVirt CR API documentation is missing docs for many fields
2096780 - [RFE] Add ssh-key and sysprep to template scripts tab
2097436 - Online disk expansion ignores filesystem overhead change
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2099556 - [RFE] Add option to enable RDP service for windows vm
2099573 - [RFE] Improve template's message about not editable
2099923 - [RFE] Merge "SSH access" and "SSH command" into one
2100290 - Error is not dismissed on catalog review page
2100436 - VM list filtering ignores VMs in error-states
2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2100629 - Update nested support KBASE article
2100679 - The number of hardware devices is not correct in vm overview tab
2100682 - All hardware devices get deleted while just delete one
2100684 - Workload profile are not editable during creation and after creation
2101144 - VM filter has two "Other" checkboxes which are triggered together
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101167 - Edit buttons clickable area is too large.
2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state
2101390 - Easy to miss the "tick" when adding GPU device to vm via UI
2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2101423 - wrong user name on using ignition
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101445 - "Pending changes - Boot Order"
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101499 - Cannot add NIC to VM template as non-priv user
2101501 - NAME parameter in VM template has no effect.
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101667 - VMI view is not aligned with vm and tempates
2101681 - All templates are labeling "source available" in template list page
2102074 - VM Creation time on VM Overview Details card lacks string
2102125 - vm clone modal is displaying DV size instead of PVC size
2102132 - align the utilization card of single VM overview with the design
2102138 - Should the word "new" be removed from "Create new VirtualMachine from catalog"?
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102475 - Template 'vm-template-example' should be filtered by 'Fedora' rather than 'Other'
2102561 - sysprep-info should link to downstream doc
2102737 - Clone a VM should lead to vm overview tab
2102740 - "Save" button on vm clone modal should be "Clone"
2103806 - "404: Not Found" appears shortly by clicking the PVC link on vm disk tab
2103807 - PVC is not named by VM name while creating vm quickly
2103817 - Workload profile values in vm details should align with template's value
2103844 - VM nic model is empty
2104331 - VM list page scroll up automatically
2104402 - VM create button is not enabled while adding multiple environment disks
2104422 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2104424 - Enable descheduler or hide it on template's scheduling tab
2104479 - [4.12] Cloned VM's snapshot restore fails if the source VM disk is deleted
2104480 - Alerts in VM overview tab disappeared after a few seconds
2104785 - "Add disk" and "Disks" are on the same line
2104859 - [RFE] Add "Copy SSH command" to VM action list
2105257 - Can't set log verbosity level for virt-operator pod
2106175 - All pages are crashed after visit Virtualization -> Overview
2106963 - Cannot add configmap for windows VM
2107279 - VM Template's bootable disk can be marked as bootable
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108339 - datasource does not provide timestamp when updated
2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed
2109818 - Upstream metrics documentation is not detailed enough
2109975 - DataVolume fails to import "cirros-container-disk-demo" image
2110256 - Storage -> PVC -> upload data, does not support source reference
2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls
2111240 - GiB changes to B in Template's Edit boot source reference modal
2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111328 - kubevirt plugin console crashed after visit vmi page
2111378 - VM SSH command generated by UI points at api VIP
2111744 - Cloned template should not label app.kubernetes.io/name: common-templates
2111794 - the virtlogd process is taking too much RAM! (17468Ki > 17Mi)
2112900 - button style are different
2114516 - Nothing happens after clicking on Fedora cloud image list link
2114636 - The style of displayed items are not unified on VM tabs
2114683 - VM overview tab is crashed just after the vm is created
2115257 - Need to Change system-product-name to "OpenShift Virtualization" in CNV-4.12
2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass
2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items
2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates
2116225 - The filter keyword of the related operator 'Openshift Data Foundation' is 'OCS' rather than 'ODF'
2116644 - Importer pod is failing to start with error "MountVolume.SetUp failed for volume "cdi-proxy-cert-vol" : configmap "custom-ca" not found"
2117549 - Cannot edit cloud-init data after add ssh key
2117803 - Cannot edit ssh even vm is stopped
2117813 - Improve descriptive text of VM details while VM is off
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
2118257 - outdated doc link tolerations modal
2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format
2119069 - Unable to start windows VMs on PSI setups
2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2119309 - readinessProbe in VM stays on failed
2119615 - Change the disk size causes the unit changed
2120907 - Cannot filter disks by label
2121320 - Negative values in migration metrics
2122236 - Failing to delete HCO with SSP sticking around
2122990 - VMExport should check APIGroup
2124147 - "ReadOnlyMany" should not be added to supported values in memory dump
2124307 - Ui crash/stuck on loading when trying to detach disk on a VM
2124528 - On upgrade, when live-migration is failed due to an infra issue, virt-handler continuously and endlessly tries to migrate it
2124555 - View documentation link on MigrationPolicies page does not work
2124557 - MigrationPolicy description is not displayed on Details page
2124558 - Non-privileged user can start MigrationPolicy creation
2124565 - Deleted DataSource reappears in list
2124572 - First annotation can not be added to DataSource
2124582 - Filtering VMs by OS does not work
2124594 - Docker URL validation is inconsistent over application
2124597 - Wrong case in Create DataSource menu
2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile
2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state
2127787 - Expose the PVC source of the dataSource on UI
2127843 - UI crashed by selecting "Live migration network"
2127931 - Change default time range on Virtualization -> Overview -> Monitoring dashboard to 30 minutes
2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer
2128002 - Error after VM template deletion
2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards
2128872 - [4.11]Can't restore cloned VM
2128948 - Cannot create DataSource from default YAML
2128949 - Cannot create MigrationPolicy from example YAML
2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2129013 - Mark Windows 11 as TechPreview
2129234 - Service is not deleted along with the VM when the VM is created from a template with service
2129301 - Cloud-init network data don't wipe out on uncheck checkbox 'Add network data'
2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook
2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV
2130588 - crypto-policy : Common Ciphers support by apiserver and hco
2130695 - crypto-policy : Logging Improvement and publish the source of ciphers
2130909 - Non-privileged user can start DataSource creation
2131157 - KV data transfer rate chart in VM Metrics tab is not displayed
2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough
2131674 - Bump virtlogd memory requirement to 20Mi
2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11
2132682 - Default YAML entity name convention.
2132721 - Delete dialogs
2132744 - Description text is missing in Live Migrations section
2132746 - Background is broken in Virtualization Monitoring page
2132783 - VM can not be created from Template with edited boot source
2132793 - Edited Template BSR is not saved
2132932 - Typo in PVC size units menu
2133540 - [pod security violation audit] Audit violation in "cni-plugins" container should be fixed
2133541 - [pod security violation audit] Audit violation in "bridge-marker" container should be fixed
2133542 - [pod security violation audit] Audit violation in "manager" container should be fixed
2133543 - [pod security violation audit] Audit violation in "kube-rbac-proxy" container should be fixed
2133655 - [pod security violation audit] Audit violation in "cdi-operator" container should be fixed
2133656 - [4.12][pod security violation audit] Audit violation in "hostpath-provisioner-operator" container should be fixed
2133659 - [pod security violation audit] Audit violation in "cdi-controller" container should be fixed
2133660 - [pod security violation audit] Audit violation in "cdi-source-update-poller" container should be fixed
2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod
2134672 - [e2e] add data-test-id for catalog -> storage section
2134825 - Authorization for expand-spec endpoint missing
2135805 - Windows 2022 template is missing vTPM and UEFI params in spec
2136051 - Name jumping when trying to create a VM with source from catalog
2136425 - Windows 11 is detected as Windows 10
2136534 - Not possible to specify a TTL on VMExports
2137123 - VMExport: export pod is not PSA compliant
2137241 - Checkbox about delete vm disks is not loaded while deleting VM
2137243 - registry input adds docker prefix twice
2137349 - "Manage source" action infinitely loading on DataImportCron details page
2137591 - Inconsistent dialog headings/titles
2137731 - Link of VM status in overview is not working
2137733 - No link for VMs in error status in "VirtualMachine statuses" card
2137736 - The column name "MigrationPolicy name" can just be "Name"
2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly
2138112 - Unsupported S3 endpoint option in Add disk modal
2138119 - "Customize VirtualMachine" flow is not user-friendly because settings are split into 2 modals
2138199 - Win11 and Win22 templates are not filtered properly by Template provider
2138653 - Saving Template parameters reloads the page
2138657 - Setting DATA_SOURCE_ Template parameters makes VM creation fail
2138664 - VM that was created with SSH key fails to start
2139257 - Cannot add disk via "Using an existing PVC"
2139260 - Clone button is disabled while VM is running
2139293 - Non-admin user cannot load VM list page
2139296 - Non-admin cannot load MigrationPolicies page
2139299 - No auto-generated VM name while creating VM by non-admin user
2139306 - Non-admin cannot create VM via customize mode
2139479 - virtualization overview crashes for non-priv user
2139574 - VM name gets "emptyname" if click the create button quickly
2139651 - non-priv user can click create when have no permissions
2139687 - catalog shows template list for non-priv users
2139738 - [4.12]Can't restore cloned VM
2139820 - non-priv user can't reach vm details
2140117 - Provide upgrade path from 4.11.1->4.12.0
2140521 - Click the breadcrumb list about "VirtualMachines" goes to undefined project
2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user
2140627 - Not able to select storageClass if there is no default storageclass defined
2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user
2140808 - Hyperv feature set to "enabled: false" prevents scheduling
2140977 - Alerts number is not correct on Virtualization overview
2140982 - The base template of cloned template is "Not available"
2140998 - Incorrect information shows in overview page per namespace
2141089 - Unable to upload boot images.
2141302 - Unhealthy states alerts and state metrics are missing
2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations
2141494 - "Start in pause mode" option is not available while creating the VM
2141654 - warning log appearing on VMs: found no SR-IOV networks
2141711 - Node column selector is redundant for non-priv user
2142468 - VM action "Stop" should not be disabled when VM in pause state
2142470 - Delete a VM or template from all projects leads to 404 error
2142511 - Enhance alerts card in overview
2142647 - Error after MigrationPolicy deletion
2142891 - VM latency checkup: Failed to create the checkup's Job
2142929 - Permission denied when trying to get instancetypes
2143268 - Topolvm storageProfile missing accessModes and volumeMode
2143498 - Could not load template while creating VM from catalog
2143964 - Could not load template while creating VM from catalog
2144580 - "?" icon is too big in VM Template Disk tab
2144828 - "?" icon is too big in VM Template Disk tab
2144839 - Alerts number is not correct on Virtualization overview
2153849 - After upgrade to 4.11.1->4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten
2155757 - Incorrect upstream-version label "v1.6.0-unstable-410-g09ea881c" is tagged to 4.12 hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202206-1961",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "13.0"
},
{
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "curl",
"scope": "gte",
"trust": 1.0,
"vendor": "haxx",
"version": "7.16.4"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.84.0"
},
{
"model": "element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:haxx:curl:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.84.0",
"versionStartIncluding": "7.16.4",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:element_software:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:bootstrap_os:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "13.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:9.1.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.0.6",
"versionStartIncluding": "9.0.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.2.12",
"versionStartIncluding": "8.2.0",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "170741"
}
],
"trust": 0.6
},
"cve": "CVE-2022-32208",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 8.6,
"impactScore": 2.9,
"integrityImpact": "NONE",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": false,
"vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 8.6,
"id": "VHN-424135",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/AU:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 5.9,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 2.2,
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2022-32208",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-424135",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "When curl \u003c 7.84.0 does FTP transfers secured by krb5, it handles message verification failures wrongly. This flaw makes it possible for a Man-In-The-Middle attack to go unnoticed and even allows it to inject data to the client. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: curl security update\nAdvisory ID: RHSA-2022:6159-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6159\nIssue date: 2022-08-24\nCVE Names: CVE-2022-32206 CVE-2022-32208\n====================================================================\n1. Summary:\n\nAn update for curl is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. \n\nSecurity Fix(es):\n\n* curl: HTTP compression denial of service (CVE-2022-32206)\n\n* curl: FTP-KRB bad message verification (CVE-2022-32208)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS (v. 8):\n\nSource:\ncurl-7.61.1-22.el8_6.4.src.rpm\n\naarch64:\ncurl-7.61.1-22.el8_6.4.aarch64.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.aarch64.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm\n\nppc64le:\ncurl-7.61.1-22.el8_6.4.ppc64le.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.ppc64le.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm\n\ns390x:\ncurl-7.61.1-22.el8_6.4.s390x.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.s390x.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.s390x.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.s390x.rpm\n\nx86_64:\ncurl-7.61.1-22.el8_6.4.x86_64.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.i686.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.i686.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.x86_64.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.i686.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6
.4.i686.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32206\nhttps://access.redhat.com/security/cve/CVE-2022-32208\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYwa9b9zjgjWX9erEAQi1rQ/+Kw4R4cPAIlGUx4vJwSMw8zwCDxnLviV+\nYgCpaCuUwCkWI9hrAQNC1O5i2MSl7j8jI9dt0Oe770VwNIZPzJMK8MX96zYdeOsg\nEiuwTW5KTWKwCeAvPt6ydVji9R0N7FMDBxmdi1aE8gBt8J6pIwp4ozrR4jXiXCjB\ndQJlc2kf7YXDiengte1jpXNCFh2ar9t8lqmW53Hu05zR8VFdAPk6NM1kTIploICN\nblR9t80TbWouBvN2A6gIZ0ZWnbJOY9odCBHdo5ay8kufmQC0K9QKb7jyoaUUHVau\n5/HVbncd7bFQuyu+yGoOxU1TCxwee3B9LAmR4uzDdJcaTxPgvK2cyskdTVz+9N9k\nnJLDYGaL7UNC7YkbByN58VC6fdGsnn8QIXHg7ICTgdhYiPZ3uP5JUiDrAGKKb/v+\nXPtwYHuh6yX0OfS0JqFEMjR0P1rFLiuDNBOPBDiTV2mBVd+7kiNTs1izUDGwQeFd\nVaNNNU4kpD3FGOgRwxIAKz2qCX+Ody8goBeJJPGcVlmDp025ZrMisl1QC8/3eTas\nML+TSvTeaSY/I35uPzKsoh1f+/lAwUsB54I6NxHH3vWYryievuSdpjtNsQInACjw\nowX+pU5CfOwdD56Hqdhb7fjuJVufo6VC8b0zy/vSZYnNt0cfojXA73F3B1K5+XcF\nbBkTeh+fqsg=powM\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nOpenShift sandboxed containers 1.3.1 is now available. 
Description:\n\nOpenShift sandboxed containers support for OpenShift Container Platform\nprovides users with built-in support for running Kata containers as an\nadditional, optional runtime. \n\nSpace precludes documenting all of the updates to OpenShift sandboxed\ncontainers in this advisory. Bugs fixed (https://bugzilla.redhat.com/):\n\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2118556 - CVE-2022-2832 blender: Null pointer reference in blender thumbnail extractor\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nKATA-1751 - CVE-2022-24675 osc-operator-container: golang: encoding/pem: fix stack overflow in Decode [rhosc-1]\nKATA-1752 - CVE-2022-28327 osc-operator-container: golang: crypto/elliptic: panic caused by oversized scalar [rhosc-1]\nKATA-1754 - OSC Pod security issue in 4.12 prevents subscribing to operator\nKATA-1758 - CVE-2022-30632 osc-operator-container: golang: path/filepath: stack exhaustion in Glob [rhosc-1]\n\n6. ==========================================================================\nUbuntu Security Notice USN-5495-1\nJune 27, 2022\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in curl. \nAn attacker could possibly use this issue to cause a denial of service. \nThis issue only affected Ubuntu 21.10, and Ubuntu 22.04 LTS. (CVE-2022-32205)\n\nHarry Sintonen discovered that curl incorrectly handled certain HTTP compressions. \nAn attacker could possibly use this issue to cause a denial of service. \n(CVE-2022-32206)\n\nHarry Sintonen incorrectly handled certain file permissions. \nAn attacker could possibly use this issue to expose sensitive information. 
\nThis issue only affected Ubuntu 21.10, and Ubuntu 22.04 LTS. (CVE-2022-32207)\n\nHarry Sintonen discovered that curl incorrectly handled certain FTP-KRB messages. \nAn attacker could possibly use this to perform a machine-in-the-middle attack. \n(CVE-2022-32208)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n curl 7.81.0-1ubuntu1.3\n libcurl3-gnutls 7.81.0-1ubuntu1.3\n libcurl3-nss 7.81.0-1ubuntu1.3\n libcurl4 7.81.0-1ubuntu1.3\n\nUbuntu 21.10:\n curl 7.74.0-1.3ubuntu2.3\n libcurl3-gnutls 7.74.0-1.3ubuntu2.3\n libcurl3-nss 7.74.0-1.3ubuntu2.3\n libcurl4 7.74.0-1.3ubuntu2.3\n\nUbuntu 20.04 LTS:\n curl 7.68.0-1ubuntu2.12\n libcurl3-gnutls 7.68.0-1ubuntu2.12\n libcurl3-nss 7.68.0-1ubuntu2.12\n libcurl4 7.68.0-1ubuntu2.12\n\nUbuntu 18.04 LTS:\n curl 7.58.0-2ubuntu3.19\n libcurl3-gnutls 7.58.0-2ubuntu3.19\n libcurl3-nss 7.58.0-2ubuntu3.19\n libcurl4 7.58.0-2ubuntu3.19\n\nIn general, a standard system update will make all the necessary changes. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.5.2 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes:\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n* vm2: Sandbox Escape in vm2 (CVE-2022-36067)\n\nBug fixes:\n\n* Submariner Globalnet e2e tests failed on MTU between On-Prem to Public\nclusters (BZ# 2074547)\n\n* OCP 4.11 - Install fails because of: pods\n\"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate\nagainst any security context constraint (BZ# 2082254)\n\n* subctl gather fails to gather libreswan data if CableDriver field is\nmissing/empty in Submariner Spec (BZ# 2083659)\n\n* Yaml editor for creating vSphere cluster moves to next line after typing\n(BZ# 2086883)\n\n* Submariner addon status doesn\u0027t track all deployment failures (BZ#\n2090311)\n\n* Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn\nwithout including s3 secret (BZ# 2091170)\n\n* After switching to ACM 2.5 the managed clusters log \"unable to create\nClusterClaim\" errors (BZ# 2095481)\n\n* Enforce failed and report the violation after modified memory value in\nlimitrange policy (BZ# 2100036)\n\n* Creating an application fails with \"This application has no subscription\nmatch selector (spec.selector.matchExpressions)\" (BZ# 2101577)\n\n* Inconsistent cluster resource statuses between \"All Subscription\"\ntopology and individual topologies (BZ# 2102273)\n\n* managed cluster is in \"unknown\" state for 120 mins after OADP restore\n\n* RHACM 2.5.2 images (BZ# 2104553)\n\n* Subscription UI does not allow binding to label with empty value (BZ#\n2104961)\n\n* Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ#\n2106069)\n\n* Region information is not available for Azure cloud in managedcluster CR\n(BZ# 2107134)\n\n* 
cluster uninstall log points to incorrect container name (BZ# 2107359)\n\n* ACM shows wrong path for Argo CD applicationset git generator (BZ#\n2107885)\n\n* Single node checkbox not visible for 4.11 images (BZ# 2109134)\n\n* Unable to deploy hypershift cluster when enabling\nvalidate-cluster-security (BZ# 2109544)\n\n* Deletion of Application (including app related resources) from the\nconsole fails to delete PlacementRule for the application (BZ# 20110026)\n\n* After the creation by a policy of job or deployment (in case the object\nis missing)ACM is trying to add new containers instead of updating (BZ#\n2117728)\n\n* pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)\n\n* ArgoCD and AppSet Applications do not deploy to local-cluster (BZ#\n2124707)\n\n3. Solution:\n\nFor Red Hat Advanced Cluster Management for Kubernetes, see the following\ndocumentation, which will be updated shortly for this release, for\nimportant\ninstructions about installing this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters\n2082254 - OCP 4.11 - Install fails because of: pods \"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate against any security context constraint\n2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec\n2086883 - Yaml editor for creating vSphere cluster moves to next line after typing\n2090311 - Submariner addon status doesn\u0027t track all deployment failures\n2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret\n2095481 - After switching to ACM 2.5 the managed clusters log \"unable to create ClusterClaim\" errors\n2100036 - Enforce failed and report the violation after modified memory value in limitrange policy\n2101577 - Creating an application fails with \"This application has no subscription match selector (spec.selector.matchExpressions)\"\n2102273 - Inconsistent cluster resource statuses between \"All Subscription\" topology and individual topologies\n2103653 - managed cluster is in \"unknown\" state for 120 mins after OADP restore\n2104553 - RHACM 2.5.2 images\n2104961 - Subscription UI does not allow binding to label with empty value\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD\n2107134 - Region information is not available for Azure cloud in managedcluster CR\n2107359 - cluster uninstall log points to incorrect container name\n2107885 - ACM shows wrong path for Argo CD applicationset git generator\n2109134 - Single node checkbox not visible for 4.11 images\n2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application\n2117728 - After the creation by a policy of job or deployment (in case the object is 
missing)ACM is trying to add new containers instead of updating\n2122292 - pods in CrashLoopBackoff on 3.11 managed cluster\n2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. Description:\n\nOpenShift Virtualization is Red Hat\u0027s virtualization solution designed for\nRed Hat OpenShift Container Platform. This advisory contains the following\nOpenShift Virtualization 4.12.0 images:\n\nSecurity Fix(es):\n\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n\n* kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n(CVE-2022-1798)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n\n* golang: net/http: improper sanitization of Transfer-Encoding header\n(CVE-2022-1705)\n\n* golang: go/parser: stack exhaustion in all Parse* functions\n(CVE-2022-1962)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\n* golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)\n\n* golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)\n\n* golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)\n\n* golang: 
net/http/httputil: NewSingleHostReverseProxy - omit\nX-Forwarded-For not working (CVE-2022-32148)\n\n* golang: crypto/tls: session tickets lack random ticket_age_add\n(CVE-2022-30629)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nRHEL-8-CNV-4.12\n\n=============\nbridge-marker-container-v4.12.0-24\ncluster-network-addons-operator-container-v4.12.0-24\ncnv-containernetworking-plugins-container-v4.12.0-24\ncnv-must-gather-container-v4.12.0-58\nhco-bundle-registry-container-v4.12.0-769\nhostpath-csi-driver-container-v4.12.0-30\nhostpath-provisioner-container-v4.12.0-30\nhostpath-provisioner-operator-container-v4.12.0-31\nhyperconverged-cluster-operator-container-v4.12.0-96\nhyperconverged-cluster-webhook-container-v4.12.0-96\nkubemacpool-container-v4.12.0-24\nkubevirt-console-plugin-container-v4.12.0-182\nkubevirt-ssp-operator-container-v4.12.0-64\nkubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55\nkubevirt-tekton-tasks-copy-template-container-v4.12.0-55\nkubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55\nkubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55\nkubevirt-tekton-tasks-operator-container-v4.12.0-40\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55\nkubevirt-template-validator-container-v4.12.0-32\nlibguestfs-tools-container-v4.12.0-255\novs-cni-marker-container-v4.12.0-24\novs-cni-plugin-container-v4.12.0-24\nvirt-api-container-v4.12.0-255\nvirt-artifacts-server-container-v4.12.0-255\nvirt-cdi-apiserver-container-v4.12.0-72\nvirt-cdi-cloner-container-v4.12.0-72\nvirt-cdi-controller-container-v4.12.0-72\nvirt-cdi-importer-container-v4.12.0-72\nvirt-cdi-operator-container-v4.12.0-72\
nvirt-cdi-uploadproxy-container-v4.12.0-71\nvirt-cdi-uploadserver-container-v4.12.0-72\nvirt-controller-container-v4.12.0-255\nvirt-exportproxy-container-v4.12.0-255\nvirt-exportserver-container-v4.12.0-255\nvirt-handler-container-v4.12.0-255\nvirt-launcher-container-v4.12.0-255\nvirt-operator-container-v4.12.0-255\nvirtio-win-container-v4.12.0-10\nvm-network-latency-checkup-container-v4.12.0-89\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1719190 - Unable to cancel live-migration if virt-launcher pod in pending state\n2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2040377 - Unable to delete failed VMIM after VM deleted\n2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed\n2052556 - Metric \"kubevirt_num_virt_handlers_by_node_running_virt_launcher\" reporting incorrect value\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2060499 - [RFE] Cannot add additional service (or other objects) to VM template\n2069098 - Large scale |VMs migration is slow due to low migration parallelism\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2071491 - Storage Throughput metrics are incorrect in Overview\n2072797 - Metrics in Virtualization -\u003e Overview period is not clear or configurable\n2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers\n2079916 - KubeVirt CR seems to be in DeploymentInProgress state and 
not recovering\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode\n2086551 - Min CPU feature found in labels\n2087724 - Default template show no boot source even there are auto-upload boot sources\n2088129 - [SSP] webhook does not comply with restricted security context\n2088464 - [CDI] cdi-deployment does not comply with restricted security context\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2089744 - HCO should label its control plane namespace to admit pods at privileged security level\n2089751 - 4.12.0 containers\n2089804 - 4.12.0 rpms\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer\n2093771 - The disk source should be PVC if the template has no auto-update boot source\n2093996 - kubectl get vmi API should always return primary interface if exist\n2094202 - Cloud-init username field should have hint\n2096285 - KubeVirt CR API documentation is missing docs for many fields\n2096780 - [RFE] Add ssh-key and sysprep to template scripts tab\n2097436 - Online disk expansion ignores filesystem overhead change\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2099556 - [RFE] Add option to enable RDP service for windows vm\n2099573 - [RFE] Improve template\u0027s message about not editable\n2099923 - [RFE] Merge \"SSH access\" and \"SSH command\" into one\n2100290 - Error is not dismissed on catalog review page\n2100436 - VM list filtering ignores VMs in error-states\n2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down\n2100495 - CVE-2021-38561 golang: out-of-bounds 
read in golang.org/x/text/language leads to DoS\n2100629 - Update nested support KBASE article\n2100679 - The number of hardware devices is not correct in vm overview tab\n2100682 - All hardware devices get deleted while just delete one\n2100684 - Workload profile are not editable during creation and after creation\n2101144 - VM filter has two \"Other\" checkboxes which are triggered together\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101167 - Edit buttons clickable area is too large. \n2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state\n2101390 - Easy to miss the \"tick\" when adding GPU device to vm via UI\n2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2101423 - wrong user name on using ignition\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101445 - \"Pending changes - Boot Order\"\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101499 - Cannot add NIC to VM template as non-priv user\n2101501 - NAME parameter in VM template has no effect. 
\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101667 - VMI view is not aligned with vm and tempates\n2101681 - All templates are labeling \"source available\" in template list page\n2102074 - VM Creation time on VM Overview Details card lacks string\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102132 - align the utilization card of single VM overview with the design\n2102138 - Should the word \"new\" be removed from \"Create new VirtualMachine from catalog\"?\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102475 - Template \u0027vm-template-example\u0027 should be filtered by \u0027Fedora\u0027 rather than \u0027Other\u0027\n2102561 - sysprep-info should link to downstream doc\n2102737 - Clone a VM should lead to vm overview tab\n2102740 - \"Save\" button on vm clone modal should be \"Clone\"\n2103806 - \"404: Not Found\" appears shortly by clicking the PVC link on vm disk tab\n2103807 - PVC is not named by VM name while creating vm quickly\n2103817 - Workload profile values in vm details should align with template\u0027s value\n2103844 - VM nic model is empty\n2104331 - VM list page scroll up automatically\n2104402 - VM create button is not enabled while adding multiple environment disks\n2104422 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2104424 - Enable descheduler or hide it on template\u0027s scheduling tab\n2104479 - [4.12] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2104480 - Alerts in VM overview tab disappeared after a few seconds\n2104785 - \"Add disk\" and \"Disks\" are on the same line\n2104859 - [RFE] Add \"Copy SSH command\" to VM action list\n2105257 - Can\u0027t set log verbosity level for virt-operator pod\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106963 - Cannot add configmap for windows 
VM\n2107279 - VM Template\u0027s bootable disk can be marked as bootable\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108339 - datasource does not provide timestamp when updated\n2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2109818 - Upstream metrics documentation is not detailed enough\n2109975 - DataVolume fails to import \"cirros-container-disk-demo\" image\n2110256 - Storage -\u003e PVC -\u003e upload data, does not support source reference\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2111240 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111328 - kubevirt plugin console crashed after visit vmi page\n2111378 - VM SSH command generated by UI points at api VIP\n2111744 - Cloned template should not label `app.kubernetes.io/name: common-templates`\n2111794 - the virtlogd process is taking too much RAM! 
(17468Ki \u003e 17Mi)\n2112900 - button style are different\n2114516 - Nothing happens after clicking on Fedora cloud image list link\n2114636 - The style of displayed items are not unified on VM tabs\n2114683 - VM overview tab is crashed just after the vm is created\n2115257 - Need to Change system-product-name to \"OpenShift Virtualization\" in CNV-4.12\n2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items\n2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates\n2116225 - The filter keyword of the related operator \u0027Openshift Data Foundation\u0027 is \u0027OCS\u0027 rather than \u0027ODF\u0027\n2116644 - Importer pod is failing to start with error \"MountVolume.SetUp failed for volume \"cdi-proxy-cert-vol\" : configmap \"custom-ca\" not found\"\n2117549 - Cannot edit cloud-init data after add ssh key\n2117803 - Cannot edit ssh even vm is stopped\n2117813 - Improve descriptive text of VM details while VM is off\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n2118257 - outdated doc link tolerations modal\n2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format\n2119069 - Unable to start windows VMs on PSI setups\n2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2119309 - readinessProbe in VM stays on failed\n2119615 - Change the disk size causes the unit changed\n2120907 - Cannot filter disks by label\n2121320 - Negative values in migration metrics\n2122236 - Failing to delete HCO with SSP sticking around\n2122990 - VMExport should check APIGroup\n2124147 - \"ReadOnlyMany\" should not be added to supported values in memory dump\n2124307 - Ui crash/stuck on loading when trying to detach disk on a VM\n2124528 - On upgrade, when live-migration is failed due to an infra issue, 
virt-handler continuously and endlessly tries to migrate it\n2124555 - View documentation link on MigrationPolicies page des not work\n2124557 - MigrationPolicy description is not displayed on Details page\n2124558 - Non-privileged user can start MigrationPolicy creation\n2124565 - Deleted DataSource reappears in list\n2124572 - First annotation can not be added to DataSource\n2124582 - Filtering VMs by OS does not work\n2124594 - Docker URL validation is inconsistent over application\n2124597 - Wrong case in Create DataSource menu\n2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile\n2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state\n2127787 - Expose the PVC source of the dataSource on UI\n2127843 - UI crashed by selecting \"Live migration network\"\n2127931 - Change default time range on Virtualization -\u003e Overview -\u003e Monitoring dashboard to 30 minutes\n2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer\n2128002 - Error after VM template deletion\n2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128948 - Cannot create DataSource from default YAML\n2128949 - Cannot create MigrationPolicy from example YAML\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129234 - Service is not deleted along with the VM when the VM is created from a template with service\n2129301 - Cloud-init network data don\u0027t wipe out on uncheck checkbox \u0027Add network data\u0027\n2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook\n2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV\n2130588 - crypto-policy : Common Ciphers support by apiserver and hco\n2130695 
- crypto-policy : Logging Improvement and publish the source of ciphers\n2130909 - Non-privileged user can start DataSource creation\n2131157 - KV data transfer rate chart in VM Metrics tab is not displayed\n2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough\n2131674 - Bump virtlogd memory requirement to 20Mi\n2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11\n2132682 - Default YAML entity name convention. \n2132721 - Delete dialogs\n2132744 - Description text is missing in Live Migrations section\n2132746 - Background is broken in Virtualization Monitoring page\n2132783 - VM can not be created from Template with edited boot source\n2132793 - Edited Template BSR is not saved\n2132932 - Typo in PVC size units menu\n2133540 - [pod security violation audit] Audit violation in \"cni-plugins\" container should be fixed\n2133541 - [pod security violation audit] Audit violation in \"bridge-marker\" container should be fixed\n2133542 - [pod security violation audit] Audit violation in \"manager\" container should be fixed\n2133543 - [pod security violation audit] Audit violation in \"kube-rbac-proxy\" container should be fixed\n2133655 - [pod security violation audit] Audit violation in \"cdi-operator\" container should be fixed\n2133656 - [4.12][pod security violation audit] Audit violation in \"hostpath-provisioner-operator\" container should be fixed\n2133659 - [pod security violation audit] Audit violation in \"cdi-controller\" container should be fixed\n2133660 - [pod security violation audit] Audit violation in \"cdi-source-update-poller\" container should be fixed\n2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod\n2134672 - [e2e] add data-test-id for catalog -\u003e storage section\n2134825 - Authorization for expand-spec endpoint missing\n2135805 - Windows 2022 template is missing vTPM and UEFI params in spec\n2136051 - Name jumping 
when trying to create a VM with source from catalog\n2136425 - Windows 11 is detected as Windows 10\n2136534 - Not possible to specify a TTL on VMExports\n2137123 - VMExport: export pod is not PSA complaint\n2137241 - Checkbox about delete vm disks is not loaded while deleting VM\n2137243 - registery input add docker prefix twice\n2137349 - \"Manage source\" action infinitely loading on DataImportCron details page\n2137591 - Inconsistent dialog headings/titles\n2137731 - Link of VM status in overview is not working\n2137733 - No link for VMs in error status in \"VirtualMachine statuses\" card\n2137736 - The column name \"MigrationPolicy name\" can just be \"Name\"\n2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly\n2138112 - Unsupported S3 endpoint option in Add disk modal\n2138119 - \"Customize VirtualMachine\" flow is not user-friendly because settings are split into 2 modals\n2138199 - Win11 and Win22 templates are not filtered properly by Template provider\n2138653 - Saving Template prameters reloads the page\n2138657 - Setting DATA_SOURCE_* Template parameters makes VM creation fail\n2138664 - VM that was created with SSH key fails to start\n2139257 - Cannot add disk via \"Using an existing PVC\"\n2139260 - Clone button is disabled while VM is running\n2139293 - Non-admin user cannot load VM list page\n2139296 - Non-admin cannot load MigrationPolicies page\n2139299 - No auto-generated VM name while creating VM by non-admin user\n2139306 - Non-admin cannot create VM via customize mode\n2139479 - virtualization overview crashes for non-priv user\n2139574 - VM name gets \"emptyname\" if click the create button quickly\n2139651 - non-priv user can click create when have no permissions\n2139687 - catalog shows template list for non-priv users\n2139738 - [4.12]Can\u0027t restore cloned VM\n2139820 - non-priv user cant reach vm details\n2140117 - Provide upgrade path from 4.11.1-\u003e4.12.0\n2140521 - Click the breadcrumb 
list about \"VirtualMachines\" goes to undefined project\n2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user\n2140627 - Not able to select storageClass if there is no default storageclass defined\n2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user\n2140808 - Hyperv feature set to \"enabled: false\" prevents scheduling\n2140977 - Alerts number is not correct on Virtualization overview\n2140982 - The base template of cloned template is \"Not available\"\n2140998 - Incorrect information shows in overview page per namespace\n2141089 - Unable to upload boot images. \n2141302 - Unhealthy states alerts and state metrics are missing\n2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations\n2141494 - \"Start in pause mode\" option is not available while creating the VM\n2141654 - warning log appearing on VMs: found no SR-IOV networks\n2141711 - Node column selector is redundant for non-priv user\n2142468 - VM action \"Stop\" should not be disabled when VM in pause state\n2142470 - Delete a VM or template from all projects leads to 404 error\n2142511 - Enhance alerts card in overview\n2142647 - Error after MigrationPolicy deletion\n2142891 - VM latency checkup: Failed to create the checkup\u0027s Job\n2142929 - Permission denied when try get instancestypes\n2143268 - Topolvm storageProfile missing accessModes and volumeMode\n2143498 - Could not load template while creating VM from catalog\n2143964 - Could not load template while creating VM from catalog\n2144580 - \"?\" icon is too big in VM Template Disk tab\n2144828 - \"?\" icon is too big in VM Template Disk tab\n2144839 - Alerts number is not correct on Virtualization overview\n2153849 - After upgrade to 4.11.1-\u003e4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten\n2155757 - Incorrect upstream-version label \"v1.6.0-unstable-410-g09ea881c\" is tagged to 4.12 
hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32208"
},
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "167661"
},
{
"db": "PACKETSTORM",
"id": "167607"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "170741"
}
],
"trust": 1.71
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-32208",
"trust": 1.9
},
{
"db": "HACKERONE",
"id": "1590071",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "167661",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168289",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168503",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167607",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168378",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168158",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168284",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168275",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168174",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168347",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168301",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-424135",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169443",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170741",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "167661"
},
{
"db": "PACKETSTORM",
"id": "167607"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"id": "VAR-202206-1961",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T19:33:18.254000Z",
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220915-0003/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213488"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2022/dsa-5197"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/28"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/41"
},
{
"trust": 1.1,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.1,
"url": "https://hackerone.com/reports/1590071"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/08/msg00017.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/bev6br4mti3cewk2yu2hqzuw5fas3fey/"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0391"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2015-20107"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2526"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/bev6br4mti3cewk2yu2hqzuw5fas3fey/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6159"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30632"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/sandboxed_containers/sandboxed-containers-release-notes.html"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7058"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/sandboxed_containers/upgrade-sandboxed-containers.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2832"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2832"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5499-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.74.0-1.3ubuntu2.3"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5495-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.68.0-1ubuntu2.12"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.81.0-1ubuntu1.3"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.19"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6507"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#critical"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-36067"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6182"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21166"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-34903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21166"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21125"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6560"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21125"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30698"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3787"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30699"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1798"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "167661"
},
{
"db": "PACKETSTORM",
"id": "167607"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "167661"
},
{
"db": "PACKETSTORM",
"id": "167607"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-07-07T00:00:00",
"db": "VULHUB",
"id": "VHN-424135"
},
{
"date": "2022-08-25T15:25:12",
"db": "PACKETSTORM",
"id": "168158"
},
{
"date": "2022-10-20T14:21:57",
"db": "PACKETSTORM",
"id": "169443"
},
{
"date": "2022-07-01T14:59:23",
"db": "PACKETSTORM",
"id": "167661"
},
{
"date": "2022-06-28T15:26:16",
"db": "PACKETSTORM",
"id": "167607"
},
{
"date": "2022-09-14T15:08:07",
"db": "PACKETSTORM",
"id": "168378"
},
{
"date": "2022-09-07T17:09:04",
"db": "PACKETSTORM",
"id": "168289"
},
{
"date": "2022-09-26T15:37:32",
"db": "PACKETSTORM",
"id": "168503"
},
{
"date": "2023-01-26T15:29:09",
"db": "PACKETSTORM",
"id": "170741"
},
{
"date": "2022-07-07T13:15:08.467000",
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-01-05T00:00:00",
"db": "VULHUB",
"id": "VHN-424135"
},
{
"date": "2024-03-27T15:00:41.657000",
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "167661"
}
],
"trust": 0.1
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat Security Advisory 2022-6159-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "168158"
}
],
"trust": 0.1
}
}
VAR-202004-2199
Vulnerability from variot - Updated: 2024-07-23 19:20 In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. jQuery contains a cross-site scripting vulnerability. Information may be obtained and information may be tampered with. jQuery is an open-source, cross-browser JavaScript library developed by the American programmer John Resig. The library simplifies interaction between HTML and JavaScript and supports modularization and plug-in extension. The vulnerability stems from the lack of proper validation of client-supplied data in web applications. An attacker could exploit this vulnerability to execute client-side code. Summary:
An update for the pki-core:10.6 and pki-deps:10.6 modules is now available for Red Hat Enterprise Linux 8. 8) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
The Public Key Infrastructure (PKI) Core contains fundamental packages required by Red Hat Certificate System.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.3 Release Notes linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):
1376706 - restore SerialNumber tag in caManualRenewal xml 1399546 - CVE-2015-9251 jquery: Cross-site scripting via cross-domain ajax requests 1406505 - KRA ECC installation failed with shared tomcat 1601614 - CVE-2018-14040 bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute 1601617 - CVE-2018-14042 bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip 1666907 - CC: Enable AIA OCSP cert checking for entire cert chain 1668097 - CVE-2016-10735 bootstrap: XSS in the data-target attribute 1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute 1695901 - CVE-2019-10179 pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA's DRM agent page in authorize recovery tab 1701972 - CVE-2019-11358 jquery: Prototype pollution in object's prototype leading to denial of service, remote code execution, or property injection 1706521 - CA - SubjectAltNameExtInput does not display text fields to the enrollment page 1710171 - CVE-2019-10146 pki-core: Reflected XSS in 'path length' constraint field in CA's Agent page 1721684 - Rebase pki-servlet-engine to 9.0.30 1724433 - caTransportCert.cfg contains MD2/MD5withRSA as signingAlgsAllowed. 1732565 - CVE-2019-10221 pki-core: Reflected XSS in getcookies?url= endpoint in CA 1732981 - When nuxwdog is enabled pkidaemon status shows instances as stopped. 
1777579 - CVE-2020-1721 pki-core: KRA vulnerable to reflected XSS via the getPk12 page 1805541 - [RFE] CA Certificate Transparency with Embedded Signed Certificate Time stamp 1817247 - Upgrade to 10.8.3 breaks PKI Tomcat Server 1821851 - [RFE] Provide SSLEngine via JSSProvider for use with PKI 1822246 - JSS - NativeProxy never calls releaseNativeResources - Memory Leak 1824939 - JSS: add RSA PSS support - RHEL 8.3 1824948 - add RSA PSS support - RHEL 8.3 1825998 - CertificatePoliciesExtDefault MAX_NUM_POLICIES hardcoded limit 1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method 1842734 - CVE-2019-10179 pki-core: pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA's DRM agent page in authorize recovery tab [rhel-8] 1842736 - CVE-2019-10146 pki-core: Reflected Cross-Site Scripting in 'path length' constraint field in CA's Agent page [rhel-8] 1843537 - Able to Perform PKI CLI operations like cert request and approval without nssdb password 1845447 - pkispawn fails in FIPS mode: AJP connector has secretRequired="true" but no secret 1850004 - CVE-2020-11023 jquery: Passing HTML containing elements to manipulation methods could result in untrusted code execution 1854043 - /usr/bin/PrettyPrintCert is failing with a ClassNotFoundException 1854959 - ca-profile-add with Netscape extensions nsCertSSLClient and nsCertEmail in the profile gets stuck in processing 1855273 - CVE-2020-15720 pki: Dogtag's python client does not validate certificates 1855319 - Not able to launch pkiconsole 1856368 - kra-key-generate request is failing 1857933 - CA Installation is failing with ncipher v12.30 HSM 1861911 - pki cli ca-cert-request-approve hangs over crmf request from client-cert-request 1869893 - Common certificates are missing in CS.cfg on shared PKI instance 1871064 - replica install failing during pki-ca component configuration 1873235 - pki ca-user-cert-add with secure port failed with 
'SSL_ERROR_INAPPROPRIATE_FALLBACK_ALERT'
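The jQuery flaw aggregated in these advisories (CVE-2020-11022) can be illustrated outside a browser. The sketch below uses a simplified form of the self-closing-tag rewrite that jQuery versions before 3.5.0 performed in jQuery.htmlPrefilter (assumption: the real regular expression excludes a few additional void tags and is slightly more permissive); it shows how a string a sanitizer may reasonably treat as inert gains an extra closing tag:

```javascript
// Simplified form of the self-closing-tag regex applied by
// jQuery < 3.5.0 in jQuery.htmlPrefilter (assumption: the upstream
// pattern differs slightly, but the rewrite it performs is the same).
const rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:-]+)[^>]*)\/>/gi;

function legacyHtmlPrefilter(html) {
  // Rewrites XHTML-style "<tag/>" into "<tag></tag>"
  return html.replace(rxhtmlTag, "<$1></$2>");
}

// Inside an open <style> element, the literal "<style/>" is just text,
// so a sanitizer can consider this string harmless. After prefiltering,
// the synthesized "</style>" closes the element early, and the markup
// that follows is parsed as live HTML by .html()/.append().
const input = "<style><style/><img src=x onerror=alert(1)>";
console.log(legacyHtmlPrefilter(input));
// -> <style><style></style><img src=x onerror=alert(1)>
```

jQuery 3.5.0 removed this rewrite entirely, which is why upgrading, rather than sanitizing alone, is the recommended fix.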
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: RHV Manager (ovirt-engine) [ovirt-4.5.2] bug fix and security update
Advisory ID: RHSA-2022:6393-01
Product: Red Hat Virtualization
Advisory URL: https://access.redhat.com/errata/RHSA-2022:6393
Issue date: 2022-09-08
CVE Names: CVE-2020-11022 CVE-2020-11023 CVE-2021-22096 CVE-2021-23358 CVE-2022-2806 CVE-2022-31129
====================================================================

1. Summary:
Updated ovirt-engine packages that fix several bugs and add various enhancements are now available.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch
- Description:
The ovirt-engine package provides the Red Hat Virtualization Manager, a centralized management platform that allows system administrators to view and manage virtual machines. The Manager provides a comprehensive range of features including search capabilities, resource management, live migrations, and virtual infrastructure provisioning.
Security Fix(es):
- nodejs-underscore: Arbitrary code execution via the template function (CVE-2021-23358)
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- jquery: Cross-site scripting due to improper jQuery.htmlPrefilter method (CVE-2020-11022)
- jquery: Untrusted code execution via <option> tag in HTML passed to DOM manipulation methods (CVE-2020-11023)
- ovirt-log-collector: RHVM admin password is logged unfiltered (CVE-2022-2806)
- springframework: malicious input leads to insertion of additional log entries (CVE-2021-22096)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
- Previously, running engine-setup did not always renew OVN certificates close to expiration or expired. With this release, OVN certificates are always renewed by engine-setup when needed. (BZ#2097558)
- Previously, the Manager issued warnings of approaching certificate expiration before engine-setup could update certificates. In this release, expiration warnings and certificate update periods are aligned, and certificates are updated as soon as expiration warnings occur. (BZ#2097725)
- With this release, OVA export or import works on hosts with a non-standard SSH port. (BZ#2104939)
- With this release, the certificate validity test is compatible with RHEL 8 and RHEL 7 based hypervisors. (BZ#2107250)
- RHV 4.4 SP1 and later are only supported on RHEL 8.6; customers cannot use RHEL 8.7 or later, and must stay with RHEL 8.6 EUS. (BZ#2108985)
- Previously, importing templates from the Administration Portal did not work. With this release, importing templates from the Administration Portal is possible. (BZ#2109923)
- ovirt-provider-ovn certificate expiration is checked along with other RHV certificates. If ovirt-provider-ovn is about to expire or already expired, a warning or alert is raised in the audit log. To renew the ovirt-provider-ovn certificate, administrators must run engine-setup. If your ovirt-provider-ovn certificate expires on a previous RHV version, upgrade to RHV 4.4 SP1 batch 2 or later, and the ovirt-provider-ovn certificate will be renewed automatically by engine-setup. (BZ#2097560)
- Previously, when importing a virtual machine with manual CPU pinning, the manual pinning string was cleared, but the CPU pinning policy was not set to NONE. As a result, importing failed. In this release, the CPU pinning policy is set to NONE if the CPU pinning string is cleared, and importing succeeds. (BZ#2104115)
- Previously, the Manager could start a virtual machine with a Resize and Pin NUMA policy on a host without an equal number of physical sockets to NUMA nodes. As a result, wrong pinning was assigned to the policy. With this release, the Manager does not allow the virtual machine to be scheduled on such a host, and the pinning is correct based on the algorithm. (BZ#1955388)
- Rebase package(s) to version: 4.4.7. Highlights, important fixes, or notable enhancements: fixed BZ#2081676 (BZ#2104831)
- In this release, rhv-log-collector-analyzer provides detailed output for each problematic image, including disk names, associated virtual machine, the host running the virtual machine, snapshots, and current SPM. The detailed view is now the default. The compact option can be set by using the --compact switch in the command line. (BZ#2097536)
- UnboundID LDAP SDK has been rebased on upstream version 6.0.4. See https://github.com/pingidentity/ldapsdk/releases for changes since version 4.0.14 (BZ#2092478)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):
1944286 - CVE-2021-23358 nodejs-underscore: Arbitrary code execution via the template function 1955388 - Auto Pinning Policy only pins some of the vCPUs on a single NUMA host 1974974 - Not possible to determine migration policy from the API, even though documentation reports that it can be done. 2034584 - CVE-2021-22096 springframework: malicious input leads to insertion of additional log entries 2080005 - CVE-2022-2806 ovirt-log-collector: RHVM admin password is logged unfiltered 2092478 - Upgrade unboundid-ldapsdk to 6.0.4 2094577 - rhv-image-discrepancies must ignore small disks created by OCP 2097536 - [RFE] Add disk name and uuid to problems output 2097558 - Renew ovirt-provider-ovn.cer certificates during engine-setup 2097560 - Warning when ovsdb-server certificates are about to expire(OVN certificate) 2097725 - Certificate Warn period and automatic renewal via engine-setup do not match 2104115 - RHV 4.5 cannot import VMs with cpu pinning 2104831 - Upgrade ovirt-log-collector to 4.4.7 2104939 - Export OVA when using host with port other than 22 2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS 2107250 - Upgrade of the host failed as the RHV 4.3 hypervisor is based on RHEL 7 with openssl 1.0.z, but RHV Manager 4.4 uses the openssl 1.1.z syntax 2107267 - ovirt-log-collector doesn't generate database dump 2108985 - RHV 4.4 SP1 EUS requires RHEL 8.6 EUS (RHEL 8.7+ releases are not supported on RHV 4.4 SP1 EUS) 2109923 - Error when importing templates in Admin portal
- Package List:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:
Source: ovirt-engine-4.5.2.4-0.1.el8ev.src.rpm ovirt-engine-dwh-4.5.4-1.el8ev.src.rpm ovirt-engine-extension-aaa-ldap-1.4.6-1.el8ev.src.rpm ovirt-engine-ui-extensions-1.3.5-1.el8ev.src.rpm ovirt-log-collector-4.4.7-2.el8ev.src.rpm ovirt-web-ui-1.9.1-1.el8ev.src.rpm rhv-log-collector-analyzer-1.0.15-1.el8ev.src.rpm unboundid-ldapsdk-6.0.4-1.el8ev.src.rpm vdsm-jsonrpc-java-1.7.2-1.el8ev.src.rpm
noarch: ovirt-engine-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-backend-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-dbscripts-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-dwh-4.5.4-1.el8ev.noarch.rpm ovirt-engine-dwh-grafana-integration-setup-4.5.4-1.el8ev.noarch.rpm ovirt-engine-dwh-setup-4.5.4-1.el8ev.noarch.rpm ovirt-engine-extension-aaa-ldap-1.4.6-1.el8ev.noarch.rpm ovirt-engine-extension-aaa-ldap-setup-1.4.6-1.el8ev.noarch.rpm ovirt-engine-health-check-bundler-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-restapi-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-setup-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-setup-base-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-setup-plugin-cinderlib-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-setup-plugin-imageio-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-setup-plugin-ovirt-engine-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-setup-plugin-ovirt-engine-common-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-setup-plugin-websocket-proxy-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-tools-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-tools-backup-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-ui-extensions-1.3.5-1.el8ev.noarch.rpm ovirt-engine-vmconsole-proxy-helper-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-webadmin-portal-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-engine-websocket-proxy-4.5.2.4-0.1.el8ev.noarch.rpm ovirt-log-collector-4.4.7-2.el8ev.noarch.rpm ovirt-web-ui-1.9.1-1.el8ev.noarch.rpm python3-ovirt-engine-lib-4.5.2.4-0.1.el8ev.noarch.rpm rhv-log-collector-analyzer-1.0.15-1.el8ev.noarch.rpm rhvm-4.5.2.4-0.1.el8ev.noarch.rpm unboundid-ldapsdk-6.0.4-1.el8ev.noarch.rpm unboundid-ldapsdk-javadoc-6.0.4-1.el8ev.noarch.rpm vdsm-jsonrpc-java-1.7.2-1.el8ev.noarch.rpm vdsm-jsonrpc-java-javadoc-1.7.2-1.el8ev.noarch.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2020-11022 https://access.redhat.com/security/cve/CVE-2020-11023 https://access.redhat.com/security/cve/CVE-2021-22096 https://access.redhat.com/security/cve/CVE-2021-23358 https://access.redhat.com/security/cve/CVE-2022-2806 https://access.redhat.com/security/cve/CVE-2022-31129 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYxnqRtzjgjWX9erEAQiQOw//XOS172gkbNeuoMSW1IYiEpJG4zQIvT2J
VvyizOMlQzpe49Bkopu1zj/e8yM1eXNIg1elPzA3280z7ruNb4fkeoXT7vM5mB/0
jRAr1ja9ZHnZmEW60X3WVhEBjEXCeOv5CWBgqzdQWSB7RpPqfMP7/4kHGFnCPZxu
V/n+Z9YKoDxeiW19tuTdU5E5cFySVV8JZAlfXlrR1dz815Ugsm2AMk6uPwjQ2+C7
Uz3zLQLjRjxFk+qSph8NYbOZGnUkypWQG5KXPMyk/Cg3jewjMkjAhzgcTJAdolRC
q3p9kD5KdWRe+3xzjy6B4IsSSqvEyHphwrRv8wgk0vIAawfgi76+jL7n/C07rdpA
Qg6zlDxmHDrZPC42dsW6dXJ1QefRQE5EzFFJcoycqvWdlRfXX6D1RZc5knSQb2iI
3iSh+hVwxY9pzNZVMlwtDHhw8dqvgw7JimToy8vOldgK0MdndwtVmKsKsRzu7HyL
PQSvcN5lSv1X5FR2tnx9LMQXX1qn0P1d/8gTiRFm8Oabjx2r8I0/HNgnJpTSVSBO
DXjKFDmwpiT+6tupM39ZbWek2hh+PoyMZJb/d6/YTND6VNlzUypq+DFtLILEaM8Z
OjWz0YAL8/ihvhq0vSdFSMFcYKSWAOXA+6pSqe7N7WtB9hl0r7sLUaRSRHti1Ime
uF/GLDTKkPw=
=8zTJ
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:
Red Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak project, that provides authentication and standards-based single sign-on capabilities for web and mobile applications. Description:
Red Hat JBoss Enterprise Application Platform 7 is a platform for Java applications based on the WildFly application runtime. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. JIRA issues fixed (https://issues.jboss.org/):
JBEAP-23864 - (7.4.z) Upgrade xmlsec from 2.1.7.redhat-00001 to 2.2.3.redhat-00001 JBEAP-23865 - GSS Upgrade Apache CXF from 3.3.13.redhat-00001 to 3.4.10.redhat-00001 JBEAP-23866 - (7.4.z) Upgrade wss4j from 2.2.7.redhat-00001 to 2.3.3.redhat-00001 JBEAP-23926 - Tracker bug for the EAP 7.4.9 release for RHEL-7 JBEAP-24055 - (7.4.z) Upgrade HAL from 3.3.15.Final-redhat-00001 to 3.3.16.Final-redhat-00001 JBEAP-24081 - (7.4.z) Upgrade Elytron from 1.15.14.Final-redhat-00001 to 1.15.15.Final-redhat-00001 JBEAP-24095 - (7.4.z) Upgrade elytron-web from 1.9.2.Final-redhat-00001 to 1.9.3.Final-redhat-00001 JBEAP-24100 - GSS Upgrade Undertow from 2.2.20.SP1-redhat-00001 to 2.2.22.SP3-redhat-00001 JBEAP-24127 - (7.4.z) UNDERTOW-2123 - Update AsyncContextImpl.dispatch to use proper value JBEAP-24128 - (7.4.z) Upgrade Hibernate Search from 5.10.7.Final-redhat-00001 to 5.10.13.Final-redhat-00001 JBEAP-24132 - GSS Upgrade Ironjacamar from 1.5.3.SP2-redhat-00001 to 1.5.10.Final-redhat-00001 JBEAP-24147 - (7.4.z) Upgrade jboss-ejb-client from 4.0.45.Final-redhat-00001 to 4.0.49.Final-redhat-00001 JBEAP-24167 - (7.4.z) Upgrade WildFly Core from 15.0.19.Final-redhat-00001 to 15.0.21.Final-redhat-00002 JBEAP-24191 - GSS Upgrade remoting from 5.0.26.SP1-redhat-00001 to 5.0.27.Final-redhat-00001 JBEAP-24195 - GSS Upgrade JSF API from 3.0.0.SP06-redhat-00001 to 3.0.0.SP07-redhat-00001 JBEAP-24207 - (7.4.z) Upgrade Soteria from 1.0.1.redhat-00002 to 1.0.1.redhat-00003 JBEAP-24248 - (7.4.z) ELY-2492 - Upgrade sshd-common in Elytron from 2.7.0 to 2.9.2 JBEAP-24426 - (7.4.z) Upgrade Elytron from 1.15.15.Final-redhat-00001 to 1.15.16.Final-redhat-00001 JBEAP-24427 - (7.4.z) Upgrade WildFly Core from 15.0.21.Final-redhat-00002 to 15.0.22.Final-redhat-00001
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202004-2199",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "communications eagle application processor",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.1.0"
},
{
"model": "banking platform",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "2.10.0"
},
{
"model": "healthcare translational research",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.3.1"
},
{
"model": "communications session report manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.1"
},
{
"model": "health sciences inform",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "6.3.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "32"
},
{
"model": "max data",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "hyperion financial reporting",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.1.2.4"
},
{
"model": "storagetek tape analytics sw tool",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "2.3.1"
},
{
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.1.0.2"
},
{
"model": "communications session route manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.1"
},
{
"model": "oncommand system manager",
"scope": "lte",
"trust": 1.0,
"vendor": "netapp",
"version": "3.1.3"
},
{
"model": "communications operations monitor",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "4.1"
},
{
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18c"
},
{
"model": "communications element manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.1"
},
{
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "8.8.6"
},
{
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19c"
},
{
"model": "financial services regulatory reporting for de nederlandsche bank",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.4"
},
{
"model": "primavera gateway",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "18.8.0"
},
{
"model": "oss support tools",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "2.12.41"
},
{
"model": "snap creator framework",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications interactive session recorder",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "6.4"
},
{
"model": "primavera gateway",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "17.12.7"
},
{
"model": "webcenter sites",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.4.0"
},
{
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "7.70"
},
{
"model": "jquery",
"scope": "lt",
"trust": 1.0,
"vendor": "jquery",
"version": "3.5.0"
},
{
"model": "primavera gateway",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "19.12.0"
},
{
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.2.0.4"
},
{
"model": "primavera gateway",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "18.8.9"
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.1.3.0.0"
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.3.0"
},
{
"model": "primavera gateway",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "19.12.4"
},
{
"model": "communications session route manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.1"
},
{
"model": "communications eagle application processor",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.4.0"
},
{
"model": "primavera gateway",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.2"
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "7.0"
},
{
"model": "communications element manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.1"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"model": "primavera gateway",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.2.11"
},
{
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "8.8.0"
},
{
"model": "banking enterprise collections",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "2.8.0"
},
{
"model": "communications services gatekeeper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.0"
},
{
"model": "healthcare translational research",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.3.2"
},
{
"model": "banking platform",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "2.4.0"
},
{
"model": "log correlation engine",
"scope": "lt",
"trust": 1.0,
"vendor": "tenable",
"version": "6.0.9"
},
{
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "financial services revenue management and billing analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "2.7"
},
{
"model": "primavera gateway",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "17.12.0"
},
{
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications session report manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.0"
},
{
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.0.1"
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.1.1.0.0"
},
{
"model": "peoplesoft enterprise human capital management resources",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "9.2"
},
{
"model": "communications analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.1.1"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications session route manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.0"
},
{
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.4.0"
},
{
"model": "snapcenter server",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications operations monitor",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.4"
},
{
"model": "banking enterprise collections",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "2.7.0"
},
{
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "8.7.0"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "financial services revenue management and billing analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "2.8"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications session report manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.1"
},
{
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "8.7.14"
},
{
"model": "communications interactive session recorder",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "6.1"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"model": "business intelligence",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "5.9.0.0.0"
},
{
"model": "jd edwards enterpriseone orchestrator",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "9.2.5.0"
},
{
"model": "healthcare translational research",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.4.0"
},
{
"model": "communications operations monitor",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "4.3"
},
{
"model": "storagetek acsls",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.5.1"
},
{
"model": "oncommand system manager",
"scope": "gte",
"trust": 1.0,
"vendor": "netapp",
"version": "3.0"
},
{
"model": "jd edwards enterpriseone tools",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "9.2.5.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "31"
},
{
"model": "siebel mobile",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "20.12"
},
{
"model": "healthcare translational research",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.2.1"
},
{
"model": "webcenter sites",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.3.0"
},
{
"model": "jquery",
"scope": "gte",
"trust": 1.0,
"vendor": "jquery",
"version": "1.0.3"
},
{
"model": "communications element manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.0"
},
{
"model": "application testing suite",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.3.0.1"
},
{
"model": "oncommand insight",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "application express",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "20.2"
},
{
"model": "hitachi ops center common services",
"scope": null,
"trust": 0.8,
"vendor": "\u65e5\u7acb",
"version": null
},
{
"model": "jquery",
"scope": null,
"trust": 0.8,
"vendor": "jquery",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"configurations": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/configurations#",
"children": {
"@container": "@list"
},
"cpe_match": {
"@container": "@list"
},
"data": {
"@container": "@list"
},
"nodes": {
"@container": "@list"
}
},
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:jquery:jquery:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "3.5.0",
"versionStartIncluding": "1.0.3",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:drupal:drupal:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "7.70",
"versionStartIncluding": "7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:drupal:drupal:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.7.14",
"versionStartIncluding": "8.7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:drupal:drupal:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "8.8.6",
"versionStartIncluding": "8.8.0",
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.1.3.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:hyperion_financial_reporting:11.1.2.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.2.1.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:webcenter_sites:12.2.1.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:application_testing_suite:13.3.0.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_operations_monitor:3.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.2.1.4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:webcenter_sites:12.2.1.4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:14.1.1.0.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_interactive_session_recorder:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "6.4",
"versionStartIncluding": "6.1",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_element_manager:8.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_element_manager:8.2.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_element_manager:8.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:application_express:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "20.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:rest_data_services:12.2.0.1:*:*:*:-:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:rest_data_services:12.1.0.2:*:*:*:-:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:rest_data_services:11.2.0.4:*:*:*:-:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:rest_data_services:18c:*:*:*:-:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:rest_data_services:19c:*:*:*:-:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_services_gatekeeper:7.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:storagetek_tape_analytics_sw_tool:2.3.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_session_report_manager:8.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_session_report_manager:8.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_session_report_manager:8.2.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_session_route_manager:8.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_session_route_manager:8.2.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_session_route_manager:8.2.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "16.2.11",
"versionStartIncluding": "16.2",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "17.12.7",
"versionStartIncluding": "17.12.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:siebel_mobile:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "20.12",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_human_capital_management_resources:9.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_regulatory_reporting_for_de_nederlandsche_bank:8.0.4:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.2.5.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_enterprise_collections:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "2.8.0",
"versionStartIncluding": "2.7.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_orchestrator:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "9.2.5.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:banking_platform:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "2.10.0",
"versionStartIncluding": "2.4.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "19.12.4",
"versionStartIncluding": "19.12.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "18.8.9",
"versionStartIncluding": "18.8.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_operations_monitor:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "4.3",
"versionStartIncluding": "4.1",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_analytics:12.1.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_translational_research:3.3.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_translational_research:3.3.2:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_translational_research:3.4.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:healthcare_translational_research:3.2.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:oss_support_tools:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "2.12.41",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_revenue_management_and_billing_analytics:2.7:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:financial_services_revenue_management_and_billing_analytics:2.8:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:health_sciences_inform:6.3.0:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:5.9.0.0.0:*:*:*:enterprise:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:communications_eagle_application_processor:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "16.4.0",
"versionStartIncluding": "16.1.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:oracle:storagetek_acsls:8.5.1:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": false
}
],
"operator": "OR"
}
],
"cpe_match": [],
"operator": "AND"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:netapp:snap_creator_framework:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:snapcenter_server:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_system_manager:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndIncluding": "3.1.3",
"versionStartIncluding": "3.0",
"vulnerable": true
},
{
"cpe23Uri": "cpe:2.3:a:netapp:max_data:-:*:*:*:*:*:*:*",
"cpe_name": [],
"vulnerable": true
}
],
"operator": "OR"
},
{
"children": [],
"cpe_match": [
{
"cpe23Uri": "cpe:2.3:a:tenable:log_correlation_engine:*:*:*:*:*:*:*:*",
"cpe_name": [],
"versionEndExcluding": "6.0.9",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "171213"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2420"
}
],
"trust": 1.2
},
"cve": "CVE-2020-11023",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"acInsufInfo": false,
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"severity": "MEDIUM",
"trust": 1.0,
"userInteractionRequired": true,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Medium",
"accessVector": "Network",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 4.3,
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2020-11023",
"impactScore": null,
"integrityImpact": "Partial",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "Medium",
"trust": 0.8,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "VHN-163560",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/AU:N/C:N/I:P/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "NVD",
"availabilityImpact": "NONE",
"baseScore": 6.1,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 2.8,
"impactScore": 2.7,
"integrityImpact": "LOW",
"privilegesRequired": "NONE",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
"version": "3.1"
},
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "security-advisories@github.com",
"availabilityImpact": "NONE",
"baseScore": 6.9,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.6,
"impactScore": 4.7,
"integrityImpact": "LOW",
"privilegesRequired": "NONE",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:C/C:H/I:L/A:N",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 6.1,
"baseSeverity": "Medium",
"confidentialityImpact": "Low",
"exploitabilityScore": null,
"id": "CVE-2020-11023",
"impactScore": null,
"integrityImpact": "Low",
"privilegesRequired": "None",
"scope": "Changed",
"trust": 0.8,
"userInteraction": "Required",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
"version": "3.0"
}
],
"severity": [
{
"author": "NVD",
"id": "CVE-2020-11023",
"trust": 1.8,
"value": "MEDIUM"
},
{
"author": "security-advisories@github.com",
"id": "CVE-2020-11023",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-202004-2420",
"trust": 0.6,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-163560",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2420"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. jQuery Contains a cross-site scripting vulnerability.Information may be obtained and information may be tampered with. jQuery is an open source, cross-browser JavaScript library developed by American John Resig programmers. The library simplifies the operation between HTML and JavaScript, and has the characteristics of modularization and plug-in extension. The vulnerability stems from the lack of correct validation of client data in WEB applications. An attacker could exploit this vulnerability to execute client code. Summary:\n\nAn update for the pki-core:10.6 and pki-deps:10.6 modules is now available\nfor Red Hat Enterprise Linux 8. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe Public Key Infrastructure (PKI) Core contains fundamental packages\nrequired by Red Hat Certificate System. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.3 Release Notes linked from the References section. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1376706 - restore SerialNumber tag in caManualRenewal xml\n1399546 - CVE-2015-9251 jquery: Cross-site scripting via cross-domain ajax requests\n1406505 - KRA ECC installation failed with shared tomcat\n1601614 - CVE-2018-14040 bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute\n1601617 - CVE-2018-14042 bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip\n1666907 - CC: Enable AIA OCSP cert checking for entire cert chain\n1668097 - CVE-2016-10735 bootstrap: XSS in the data-target attribute\n1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute\n1695901 - CVE-2019-10179 pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA\u0027s DRM agent page in authorize recovery tab\n1701972 - CVE-2019-11358 jquery: Prototype pollution in object\u0027s prototype leading to denial of service, remote code execution, or property injection\n1706521 - CA - SubjectAltNameExtInput does not display text fields to the enrollment page\n1710171 - CVE-2019-10146 pki-core: Reflected XSS in \u0027path length\u0027 constraint field in CA\u0027s Agent page\n1721684 - Rebase pki-servlet-engine to 9.0.30\n1724433 - caTransportCert.cfg contains MD2/MD5withRSA as signingAlgsAllowed. \n1732565 - CVE-2019-10221 pki-core: Reflected XSS in getcookies?url= endpoint in CA\n1732981 - When nuxwdog is enabled pkidaemon status shows instances as stopped. 
\n1777579 - CVE-2020-1721 pki-core: KRA vulnerable to reflected XSS via the getPk12 page\n1805541 - [RFE] CA Certificate Transparency with Embedded Signed Certificate Time stamp\n1817247 - Upgrade to 10.8.3 breaks PKI Tomcat Server\n1821851 - [RFE] Provide SSLEngine via JSSProvider for use with PKI\n1822246 - JSS - NativeProxy never calls releaseNativeResources - Memory Leak\n1824939 - JSS: add RSA PSS support - RHEL 8.3\n1824948 - add RSA PSS support - RHEL 8.3\n1825998 - CertificatePoliciesExtDefault MAX_NUM_POLICIES hardcoded limit\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n1842734 - CVE-2019-10179 pki-core: pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA\u0027s DRM agent page in authorize recovery tab [rhel-8]\n1842736 - CVE-2019-10146 pki-core: Reflected Cross-Site Scripting in \u0027path length\u0027 constraint field in CA\u0027s Agent page [rhel-8]\n1843537 - Able to Perform PKI CLI operations like cert request and approval without nssdb password\n1845447 - pkispawn fails in FIPS mode: AJP connector has secretRequired=\"true\" but no secret\n1850004 - CVE-2020-11023 jquery: Passing HTML containing \u003coption\u003e elements to manipulation methods could result in untrusted code execution\n1854043 - /usr/bin/PrettyPrintCert is failing with a ClassNotFoundException\n1854959 - ca-profile-add with Netscape extensions nsCertSSLClient and nsCertEmail in the profile gets stuck in processing\n1855273 - CVE-2020-15720 pki: Dogtag\u0027s python client does not validate certificates\n1855319 - Not able to launch pkiconsole\n1856368 - kra-key-generate request is failing\n1857933 - CA Installation is failing with ncipher v12.30 HSM\n1861911 - pki cli ca-cert-request-approve hangs over crmf request from client-cert-request\n1869893 - Common certificates are missing in CS.cfg on shared PKI instance\n1871064 - replica install failing during pki-ca component configuration\n1873235 - pki 
ca-user-cert-add with secure port failed with \u0027SSL_ERROR_INAPPROPRIATE_FALLBACK_ALERT\u0027\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: RHV Manager (ovirt-engine) [ovirt-4.5.2] bug fix and security update\nAdvisory ID: RHSA-2022:6393-01\nProduct: Red Hat Virtualization\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6393\nIssue date: 2022-09-08\nCVE Names: CVE-2020-11022 CVE-2020-11023 CVE-2021-22096\n CVE-2021-23358 CVE-2022-2806 CVE-2022-31129\n====================================================================\n1. Summary:\n\nUpdated ovirt-engine packages that fix several bugs and add various\nenhancements are now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch\n\n3. Description:\n\nThe ovirt-engine package provides the Red Hat Virtualization Manager, a\ncentralized management platform that allows system administrators to view\nand manage virtual machines. The Manager provides a comprehensive range of\nfeatures including search capabilities, resource management, live\nmigrations, and virtual infrastructure provisioning. 
\n\nSecurity Fix(es):\n\n* nodejs-underscore: Arbitrary code execution via the template function\n(CVE-2021-23358)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* jquery: Cross-site scripting due to improper injQuery.htmlPrefilter\nmethod (CVE-2020-11022)\n\n* jquery: Untrusted code execution via \u003coption\u003e tag in HTML passed to DOM\nmanipulation methods (CVE-2020-11023)\n\n* ovirt-log-collector: RHVM admin password is logged unfiltered\n(CVE-2022-2806)\n\n* springframework: malicious input leads to insertion of additional log\nentries (CVE-2021-22096)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* Previously, running engine-setup did not always renew OVN certificates\nclose to expiration or expired. With this release, OVN certificates are\nalways renewed by engine-setup when needed. (BZ#2097558)\n\n* Previously, the Manager issued warnings of approaching certificate\nexpiration before engine-setup could update certificates. In this release\nexpiration warnings and certificate update periods are aligned, and\ncertificates are updated as soon as expiration warnings occur. (BZ#2097725)\n\n* With this release, OVA export or import work on hosts with a non-standard\nSSH port. (BZ#2104939)\n\n* With this release, the certificate validity test is compatible with RHEL\n8 and RHEL 7 based hypervisors. (BZ#2107250)\n\n* RHV 4.4 SP1 and later are only supported on RHEL 8.6, customers cannot\nuse RHEL 8.7 or later, and must stay with RHEL 8.6 EUS. (BZ#2108985)\n\n* Previously, importing templates from the Administration Portal did not\nwork. With this release, importing templates from the Administration Portal\nis possible. (BZ#2109923)\n\n* ovirt-provider-ovn certificate expiration is checked along with other RHV\ncertificates. 
If ovirt-provider-ovn is about to expire or already expired,\na warning or alert is raised in the audit log. To renew the\novirt-provider-ovn certificate, administrators must run engine-setup. If\nyour ovirt-provider-ovn certificate expires on a previous RHV version,\nupgrade to RHV 4.4 SP1 batch 2 or later, and the ovirt-provider-ovn\ncertificate will be renewed automatically by engine-setup. (BZ#2097560)\n\n* Previously, when importing a virtual machine with manual CPU pinning, the\nmanual pinning string was cleared, but the CPU pinning policy was not set\nto NONE. As a result, importing failed. In this release, the CPU pinning\npolicy is set to NONE if the CPU pinning string is cleared, and importing\nsucceeds. (BZ#2104115)\n\n* Previously, the Manager could start a virtual machine with a Resize and\nPin NUMA policy on a host without an equal number of physical sockets to\nNUMA nodes. As a result, wrong pinning was assigned to the policy. With\nthis release, the Manager does not allow the virtual machine to be\nscheduled on such a host, and the pinning is correct based on\nthe algorithm. (BZ#1955388)\n\n* Rebase package(s) to version: 4.4.7. \nHighlights, important fixes, or notable enhancements: fixed BZ#2081676\n(BZ#2104831)\n\n* In this release, rhv-log-collector-analyzer provides detailed output for\neach problematic image, including disk names, associated virtual machine,\nthe host running the virtual machine, snapshots, and current SPM. The\ndetailed view is now the default. The compact option can be set by using\nthe --compact switch in the command line. (BZ#2097536)\n\n* UnboundID LDAP SDK has been rebased on upstream version 6.0.4. See\nhttps://github.com/pingidentity/ldapsdk/releases for changes since version\n4.0.14. (BZ#2092478)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. 
\n1944286 - CVE-2021-23358 nodejs-underscore: Arbitrary code execution via the template function\n1955388 - Auto Pinning Policy only pins some of the vCPUs on a single NUMA host\n1974974 - Not possible to determine migration policy from the API, even though documentation reports that it can be done. \n2034584 - CVE-2021-22096 springframework: malicious input leads to insertion of additional log entries\n2080005 - CVE-2022-2806 ovirt-log-collector: RHVM admin password is logged unfiltered\n2092478 - Upgrade unboundid-ldapsdk to 6.0.4\n2094577 - rhv-image-discrepancies must ignore small disks created by OCP\n2097536 - [RFE] Add disk name and uuid to problems output\n2097558 - Renew ovirt-provider-ovn.cer certificates during engine-setup\n2097560 - Warning when ovsdb-server certificates are about to expire (OVN certificate)\n2097725 - Certificate Warn period and automatic renewal via engine-setup do not match\n2104115 - RHV 4.5 cannot import VMs with cpu pinning\n2104831 - Upgrade ovirt-log-collector to 4.4.7\n2104939 - Export OVA when using host with port other than 22\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2107250 - Upgrade of the host failed as the RHV 4.3 hypervisor is based on RHEL 7 with openssl 1.0.z, but RHV Manager 4.4 uses the openssl 1.1.z syntax\n2107267 - ovirt-log-collector doesn\u0027t generate database dump\n2108985 - RHV 4.4 SP1 EUS requires RHEL 8.6 EUS (RHEL 8.7+ releases are not supported on RHV 4.4 SP1 EUS)\n2109923 - Error when importing templates in Admin portal\n\n6. 
Package List:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:\n\nSource:\novirt-engine-4.5.2.4-0.1.el8ev.src.rpm\novirt-engine-dwh-4.5.4-1.el8ev.src.rpm\novirt-engine-extension-aaa-ldap-1.4.6-1.el8ev.src.rpm\novirt-engine-ui-extensions-1.3.5-1.el8ev.src.rpm\novirt-log-collector-4.4.7-2.el8ev.src.rpm\novirt-web-ui-1.9.1-1.el8ev.src.rpm\nrhv-log-collector-analyzer-1.0.15-1.el8ev.src.rpm\nunboundid-ldapsdk-6.0.4-1.el8ev.src.rpm\nvdsm-jsonrpc-java-1.7.2-1.el8ev.src.rpm\n\nnoarch:\novirt-engine-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-backend-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-dbscripts-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-dwh-4.5.4-1.el8ev.noarch.rpm\novirt-engine-dwh-grafana-integration-setup-4.5.4-1.el8ev.noarch.rpm\novirt-engine-dwh-setup-4.5.4-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-1.4.6-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-setup-1.4.6-1.el8ev.noarch.rpm\novirt-engine-health-check-bundler-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-restapi-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-base-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-cinderlib-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-imageio-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-common-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-vmconsole-proxy-helper-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-websocket-proxy-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-tools-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-tools-backup-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-ui-extensions-1.3.5-1.el8ev.noarch.rpm\novirt-engine-vmconsole-proxy-helper-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-webadmin-portal-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-websocket-proxy-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-log-collector-4.4.7-2.el8ev.noarch.rpm\novirt-web-ui-1.9.1-1.el8ev.noarch.rpm\npython3
-ovirt-engine-lib-4.5.2.4-0.1.el8ev.noarch.rpm\nrhv-log-collector-analyzer-1.0.15-1.el8ev.noarch.rpm\nrhvm-4.5.2.4-0.1.el8ev.noarch.rpm\nunboundid-ldapsdk-6.0.4-1.el8ev.noarch.rpm\nunboundid-ldapsdk-javadoc-6.0.4-1.el8ev.noarch.rpm\nvdsm-jsonrpc-java-1.7.2-1.el8ev.noarch.rpm\nvdsm-jsonrpc-java-javadoc-1.7.2-1.el8ev.noarch.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-11022\nhttps://access.redhat.com/security/cve/CVE-2020-11023\nhttps://access.redhat.com/security/cve/CVE-2021-22096\nhttps://access.redhat.com/security/cve/CVE-2021-23358\nhttps://access.redhat.com/security/cve/CVE-2022-2806\nhttps://access.redhat.com/security/cve/CVE-2022-31129\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxnqRtzjgjWX9erEAQiQOw//XOS172gkbNeuoMSW1IYiEpJG4zQIvT2J\nVvyizOMlQzpe49Bkopu1zj/e8yM1eXNIg1elPzA3280z7ruNb4fkeoXT7vM5mB/0\njRAr1ja9ZHnZmEW60X3WVhEBjEXCeOv5CWBgqzdQWSB7RpPqfMP7/4kHGFnCPZxu\nV/n+Z9YKoDxeiW19tuTdU5E5cFySVV8JZAlfXlrR1dz815Ugsm2AMk6uPwjQ2+C7\nUz3zLQLjRjxFk+qSph8NYbOZGnUkypWQG5KXPMyk/Cg3jewjMkjAhzgcTJAdolRC\nq3p9kD5KdWRe+3xzjy6B4IsSSqvEyHphwrRv8wgk0vIAawfgi76+jL7n/C07rdpA\nQg6zlDxmHDrZPC42dsW6dXJ1QefRQE5EzFFJcoycqvWdlRfXX6D1RZc5knSQb2iI\n3iSh+hVwxY9pzNZVMlwtDHhw8dqvgw7JimToy8vOldgK0MdndwtVmKsKsRzu7HyL\nPQSvcN5lSv1X5FR2tnx9LMQXX1qn0P1d/8gTiRFm8Oabjx2r8I0/HNgnJpTSVSBO\nDXjKFDmwpiT+6tupM39ZbWek2hh+PoyMZJb/d6/YTND6VNlzUypq+DFtLILEaM8Z\nOjWz0YAL8/ihvhq0vSdFSMFcYKSWAOXA+6pSqe7N7WtB9hl0r7sLUaRSRHti1Ime\nuF/GLDTKkPw=8zTJ\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak\nproject, that provides authentication and standards-based single sign-on\ncapabilities for web and mobile applications. Description:\n\nRed Hat JBoss Enterprise Application Platform 7 is a platform for Java\napplications based on the WildFly application runtime. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
JIRA issues fixed (https://issues.jboss.org/):\n\nJBEAP-23864 - (7.4.z) Upgrade xmlsec from 2.1.7.redhat-00001 to 2.2.3.redhat-00001\nJBEAP-23865 - [GSS](7.4.z) Upgrade Apache CXF from 3.3.13.redhat-00001 to 3.4.10.redhat-00001\nJBEAP-23866 - (7.4.z) Upgrade wss4j from 2.2.7.redhat-00001 to 2.3.3.redhat-00001\nJBEAP-23926 - Tracker bug for the EAP 7.4.9 release for RHEL-7\nJBEAP-24055 - (7.4.z) Upgrade HAL from 3.3.15.Final-redhat-00001 to 3.3.16.Final-redhat-00001\nJBEAP-24081 - (7.4.z) Upgrade Elytron from 1.15.14.Final-redhat-00001 to 1.15.15.Final-redhat-00001\nJBEAP-24095 - (7.4.z) Upgrade elytron-web from 1.9.2.Final-redhat-00001 to 1.9.3.Final-redhat-00001\nJBEAP-24100 - [GSS](7.4.z) Upgrade Undertow from 2.2.20.SP1-redhat-00001 to 2.2.22.SP3-redhat-00001\nJBEAP-24127 - (7.4.z) UNDERTOW-2123 - Update AsyncContextImpl.dispatch to use proper value\nJBEAP-24128 - (7.4.z) Upgrade Hibernate Search from 5.10.7.Final-redhat-00001 to 5.10.13.Final-redhat-00001\nJBEAP-24132 - [GSS](7.4.z) Upgrade Ironjacamar from 1.5.3.SP2-redhat-00001 to 1.5.10.Final-redhat-00001\nJBEAP-24147 - (7.4.z) Upgrade jboss-ejb-client from 4.0.45.Final-redhat-00001 to 4.0.49.Final-redhat-00001\nJBEAP-24167 - (7.4.z) Upgrade WildFly Core from 15.0.19.Final-redhat-00001 to 15.0.21.Final-redhat-00002\nJBEAP-24191 - [GSS](7.4.z) Upgrade remoting from 5.0.26.SP1-redhat-00001 to 5.0.27.Final-redhat-00001\nJBEAP-24195 - [GSS](7.4.z) Upgrade JSF API from 3.0.0.SP06-redhat-00001 to 3.0.0.SP07-redhat-00001\nJBEAP-24207 - (7.4.z) Upgrade Soteria from 1.0.1.redhat-00002 to 1.0.1.redhat-00003\nJBEAP-24248 - (7.4.z) ELY-2492 - Upgrade sshd-common in Elytron from 2.7.0 to 2.9.2\nJBEAP-24426 - (7.4.z) Upgrade Elytron from 1.15.15.Final-redhat-00001 to 1.15.16.Final-redhat-00001\nJBEAP-24427 - (7.4.z) Upgrade WildFly Core from 15.0.21.Final-redhat-00002 to 15.0.22.Final-redhat-00001\n\n7",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-11023"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "171213"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170817"
}
],
"trust": 2.25
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2020-11023",
"trust": 3.9
},
{
"db": "PACKETSTORM",
"id": "162160",
"trust": 1.7
},
{
"db": "TENABLE",
"id": "TNS-2021-02",
"trust": 1.7
},
{
"db": "TENABLE",
"id": "TNS-2021-10",
"trust": 1.7
},
{
"db": "PACKETSTORM",
"id": "159852",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170821",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "168304",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU99394498",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU94912830",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-21-306-01",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005056",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170823",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "162651",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "160274",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159275",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "161727",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "161830",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "158797",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "160548",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164887",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "158750",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159513",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "158555",
"trust": 0.7
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2420",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2020.2694",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0620",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0845",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3823",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4248",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2714",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3700",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1351",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2775",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1066",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1916",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3485",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3663",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0909",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1961",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.0583",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1653",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.0585",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1863",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1519",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0824",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2375",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3255",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0923",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1703",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5150",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2525",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1804",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3875",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2660",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1512",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2660.3",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4421",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2287",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "158406",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "158282",
"trust": 0.6
},
{
"db": "NSFOCUS",
"id": "48902",
"trust": 0.6
},
{
"db": "LENOVO",
"id": "LEN-60182",
"trust": 0.6
},
{
"db": "EXPLOIT-DB",
"id": "49767",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021110301",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022012403",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022022516",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021072824",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021052207",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072027",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022011837",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021042101",
"trust": 0.6
},
{
"db": "ICS CERT",
"id": "ICSA-22-097-01",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "171213",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171212",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170817",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171214",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170819",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-163560",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "171213"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2420"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"id": "VAR-202004-2199",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
}
],
"trust": 0.01
},
"last_update_date": "2024-07-23T19:20:16.457000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "hitachi-sec-2020-130 Software product security information",
"trust": 0.8,
"url": "https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/"
},
{
"title": "jQuery Fixes for cross-site scripting vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=178501"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2420"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-79",
"trust": 1.1
},
{
"problemtype": "Cross-site scripting (CWE-79) [NVD Evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 2.3,
"url": "http://packetstormsecurity.com/files/162160/jquery-1.0.3-cross-site-scripting.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuapr2021.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
},
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 2.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11023"
},
{
"trust": 1.7,
"url": "https://github.com/jquery/jquery/security/advisories/ghsa-jpcq-cgw6-v4j6"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20200511-0006/"
},
{
"trust": 1.7,
"url": "https://www.drupal.org/sa-core-2020-002"
},
{
"trust": 1.7,
"url": "https://www.tenable.com/security/tns-2021-02"
},
{
"trust": 1.7,
"url": "https://www.tenable.com/security/tns-2021-10"
},
{
"trust": 1.7,
"url": "https://www.debian.org/security/2020/dsa-4693"
},
{
"trust": 1.7,
"url": "https://security.gentoo.org/glsa/202007-03"
},
{
"trust": 1.7,
"url": "https://blog.jquery.com/2020/04/10/jquery-3-5-0-released"
},
{
"trust": 1.7,
"url": "https://jquery.com/upgrade-guide/3.5/"
},
{
"trust": 1.7,
"url": "https://www.oracle.com//security-alerts/cpujul2021.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujan2021.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujul2020.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2021/03/msg00033.html"
},
{
"trust": 1.7,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00067.html"
},
{
"trust": 1.7,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00085.html"
},
{
"trust": 1.7,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-11/msg00039.html"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r0483ba0072783c2e1bfea613984bfb3c86e73ba8879d780dc1cc7d36%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r0593393ca1e97b1e7e098fe69d414d6bd0a467148e9138d07e86ebbb%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r07ab379471fb15644bf7a92e4a98cbc7df3cf4e736abae0cc7625fe6%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r094f435595582f6b5b24b66fedf80543aa8b1d57a3688fbcc21f06ec%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r1fed19c860a0d470f2a3eded12795772c8651ff583ef951ddac4918c%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r2c85121a47442036c7f8353a3724aa04f8ecdfda1819d311ba4f5330%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r3702ede0ff83a29ba3eb418f6f11c473d6e3736baba981a8dbd9c9ef%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r49ce4243b4738dd763caeb27fa8ad6afb426ae3e8c011ff00b8b1f48%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r4aadb98086ca72ed75391f54167522d91489a0d0ae25b12baa8fc7c5%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r4dba67be3239b34861f1b9cfdf9dfb3a90272585dcce374112ed6e16%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r54565a8f025c7c4f305355fdfd75b68eca442eebdb5f31c2e7d977ae%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r55f5e066cc7301e3630ce90bbbf8d28c82212ae1f2d4871012141494%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r564585d97bc069137e64f521e68ba490c7c9c5b342df5d73c49a0760%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r6c4df3b33e625a44471009a172dabe6865faec8d8f21cac2303463b1%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r6e97b37963926f6059ecc1e417721608723a807a76af41d4e9dbed49%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r706cfbc098420f7113968cc377247ec3d1439bce42e679c11c609e2d%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r8f70b0f65d6bedf316ecd899371fd89e65333bc988f6326d2956735c%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r9006ad2abf81d02a0ef2126bab5177987e59095b7194a487c4ea247c%40%3ccommits.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r9c5fda81e4bca8daee305b4c03283dddb383ab8428a151d4cb0b3b15%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r9e0bd31b7da9e7403478d22652b8760c946861f8ebd7bd750844898e%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ra32c7103ded9041c7c1cb8c12c8d125a6b2f3f3270e2937ef8417fac%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ra374bb0299b4aa3e04edde01ebc03ed6f90cf614dad40dd428ce8f72%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ra3c9219fcb0b289e18e9ec5a5ebeaa5c17d6b79a201667675af6721c%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ra406b3adfcffcb5ce8707013bdb7c35e3ffc2776a8a99022f15274c6%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rab82dd040f302018c85bd07d33f5604113573514895ada523c3401d9%40%3ccommits.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/radcb2aa874a79647789f3563fcbbceaf1045a029ee8806b59812a8ea%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rb25c3bc7418ae75cba07988dafe1b6912f76a9dd7d94757878320d61%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rb69b7d8217c1a6a2100247a5d06ce610836b31e3f5d73fc113ded8e7%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rbb448222ba62c430e21e13f940be4cb5cfc373cd3bce56b48c0ffa67%40%3cdev.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rd38b4185a797b324c8dd940d9213cf99fcdc2dbf1fc5a63ba7dee8c9%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rda99599896c3667f2cc9e9d34c7b6ef5d2bbed1f4801e1d75a2b0679%40%3ccommits.nifi.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/re4ae96fa5c1a2fe71ccbb7b7ac1538bd0cb677be270a2bf6e2f8d108%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rede9cfaa756e050a3d83045008f84a62802fc68c17f2b4eabeaae5e4%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ree3bd8ddb23df5fa4e372d11c226830ea3650056b1059f3965b3fce2%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rf0f8939596081d84be1ae6a91d6248b96a02d8388898c372ac807817%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rf1ba79e564fe7efc56aef7c986106f1cf67a3427d08e997e088e7a93%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rf661a90a15da8da5922ba6127b3f5f8194d4ebec8855d60a0dd13248%40%3cdev.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.debian.org/debian-lts-announce/2023/08/msg00040.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/avkyxlwclzbv2n7m46kyk4lva5oxwpby/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qpn2l2xvqgua2v5hnqjwhk3apsk3vn7k/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/sapqvx3xdnpgft26qaq6ajixzzbz4cd4/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/sfp4uk4egp4afh2mwyj5a5z4i7xvfq6b/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99394498/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu94912830/"
},
{
"trust": 0.8,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-21-306-01"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/avkyxlwclzbv2n7m46kyk4lva5oxwpby/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qpn2l2xvqgua2v5hnqjwhk3apsk3vn7k/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/sfp4uk4egp4afh2mwyj5a5z4i7xvfq6b/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/sapqvx3xdnpgft26qaq6ajixzzbz4cd4/"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r9006ad2abf81d02a0ef2126bab5177987e59095b7194a487c4ea247c@%3ccommits.felix.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r07ab379471fb15644bf7a92e4a98cbc7df3cf4e736abae0cc7625fe6@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r3702ede0ff83a29ba3eb418f6f11c473d6e3736baba981a8dbd9c9ef@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rf0f8939596081d84be1ae6a91d6248b96a02d8388898c372ac807817@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r9e0bd31b7da9e7403478d22652b8760c946861f8ebd7bd750844898e@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r2c85121a47442036c7f8353a3724aa04f8ecdfda1819d311ba4f5330@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r4dba67be3239b34861f1b9cfdf9dfb3a90272585dcce374112ed6e16@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r55f5e066cc7301e3630ce90bbbf8d28c82212ae1f2d4871012141494@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rbb448222ba62c430e21e13f940be4cb5cfc373cd3bce56b48c0ffa67@%3cdev.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r706cfbc098420f7113968cc377247ec3d1439bce42e679c11c609e2d@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r49ce4243b4738dd763caeb27fa8ad6afb426ae3e8c011ff00b8b1f48@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r564585d97bc069137e64f521e68ba490c7c9c5b342df5d73c49a0760@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r8f70b0f65d6bedf316ecd899371fd89e65333bc988f6326d2956735c@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rede9cfaa756e050a3d83045008f84a62802fc68c17f2b4eabeaae5e4@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/ree3bd8ddb23df5fa4e372d11c226830ea3650056b1059f3965b3fce2@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r54565a8f025c7c4f305355fdfd75b68eca442eebdb5f31c2e7d977ae@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/re4ae96fa5c1a2fe71ccbb7b7ac1538bd0cb677be270a2bf6e2f8d108@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r0483ba0072783c2e1bfea613984bfb3c86e73ba8879d780dc1cc7d36@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rab82dd040f302018c85bd07d33f5604113573514895ada523c3401d9@%3ccommits.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rf661a90a15da8da5922ba6127b3f5f8194d4ebec8855d60a0dd13248@%3cdev.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/ra3c9219fcb0b289e18e9ec5a5ebeaa5c17d6b79a201667675af6721c@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/ra374bb0299b4aa3e04edde01ebc03ed6f90cf614dad40dd428ce8f72@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rb25c3bc7418ae75cba07988dafe1b6912f76a9dd7d94757878320d61@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rf1ba79e564fe7efc56aef7c986106f1cf67a3427d08e997e088e7a93@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/ra32c7103ded9041c7c1cb8c12c8d125a6b2f3f3270e2937ef8417fac@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r1fed19c860a0d470f2a3eded12795772c8651ff583ef951ddac4918c@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r094f435595582f6b5b24b66fedf80543aa8b1d57a3688fbcc21f06ec@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r9c5fda81e4bca8daee305b4c03283dddb383ab8428a151d4cb0b3b15@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r6e97b37963926f6059ecc1e417721608723a807a76af41d4e9dbed49@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rb69b7d8217c1a6a2100247a5d06ce610836b31e3f5d73fc113ded8e7@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rd38b4185a797b324c8dd940d9213cf99fcdc2dbf1fc5a63ba7dee8c9@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/radcb2aa874a79647789f3563fcbbceaf1045a029ee8806b59812a8ea@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r4aadb98086ca72ed75391f54167522d91489a0d0ae25b12baa8fc7c5@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/ra406b3adfcffcb5ce8707013bdb7c35e3ffc2776a8a99022f15274c6@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r0593393ca1e97b1e7e098fe69d414d6bd0a467148e9138d07e86ebbb@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/r6c4df3b33e625a44471009a172dabe6865faec8d8f21cac2303463b1@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rda99599896c3667f2cc9e9d34c7b6ef5d2bbed1f4801e1d75a2b0679@%3ccommits.nifi.apache.org%3e"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-11023"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11022"
},
{
"trust": 0.6,
        "url": "https://bugzilla.redhat.com/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-11022"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021110301"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159513/red-hat-security-advisory-2020-4211-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-kenexa-lcms-premier-on-premise-all-jquery-publicly-disclosed-vulnerability-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4248/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022011837"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3823"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2287/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158797/red-hat-security-advisory-2020-3369-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161727/red-hat-security-advisory-2021-0778-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159275/red-hat-security-advisory-2020-3807-01.html"
},
{
"trust": 0.6,
"url": "https://www.oracle.com/security-alerts/cpujul2021.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161830/red-hat-security-advisory-2021-0860-01.html"
},
{
"trust": 0.6,
"url": "https://www.exploit-db.com/exploits/49767"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162651/red-hat-security-advisory-2021-1846-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3875/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-jquery-vulnerabilities-affect-ibm-emptoris-strategic-supply-management-platform-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6520510"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158555/gentoo-linux-security-advisory-202007-03.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-jquery-as-used-by-ibm-qradar-network-packet-capture-is-vulnerable-to-cross-site-scripting-xss-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1653"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-10/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-8/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0923"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2694/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158282/red-hat-security-advisory-2020-2813-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2375/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0845"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2775/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1066"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-jquery-affect-ibm-license-metric-tool-v9/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5150"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168304/red-hat-security-advisory-2022-6393-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1804/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/160274/red-hat-security-advisory-2020-5249-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0824"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-verify-information-queue-uses-a-node-js-package-with-known-vulnerabilities-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021042101"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1961/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1512"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-cross-site-scripting-vulnerabilities-in-jquery-might-affect-ibm-business-automation-workflow-and-ibm-business-process-manager-bpm-cve-2020-7656-cve-2020-11022-cve-2020-11023-2/"
},
{
"trust": 0.6,
"url": "http://www.nsfocus.net/vulndb/48902"
},
{
"trust": 0.6,
"url": "https://support.lenovo.com/us/en/product_security/len-60182"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilites-affect-ibm-jazz-foundation-and-ibm-engineering-products-5/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022022516"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158750/red-hat-security-advisory-2020-3247-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-jquery-as-used-in-ibm-security-qradar-packet-capture-is-vulnerable-to-cross-site-scripting-xss-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1703"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2714/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6525182"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158406/red-hat-security-advisory-2020-2412-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-tivoli-netcool-impact-is-affected-by-jquery-vulnerabilities-cve-2020-11022-cve-2020-11023/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/160548/red-hat-security-advisory-2020-5412-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2660.3/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-api-connect-is-impacted-by-vulnerabilities-in-drupal-cve-2020-11022-cve-2020-11023/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-kenexa-lms-on-premise-all-jquery-publicly-disclosed-vulnerability-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-planning-analytics-workspace-is-affected-by-security-vulnerabilities-3/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1863/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-api-connect-is-impacted-by-vulnerabilities-in-drupal-cve-2020-11022-cve-2020-11023-2/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-jquery-fixed-in-mobile-foundation-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-jquery-affect-ibm-wiotp-messagegateway-cve-2020-11023-cve-2020-11022/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3700/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1916"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1519"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072027"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0909"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-security-vulnerabilities-have-been-fixed-in-ibm-security-identity-manager-virtual-appliance/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021052207"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170821/red-hat-security-advisory-2023-0552-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0585"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159852/red-hat-security-advisory-2020-4847-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2525"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2660/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4421/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0620"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1351"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0583"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-license-key-server-administration-and-reporting-tool-is-impacted-by-multiple-vulnerabilities-in-jquery-bootstrap-and-angularjs/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerability-issues-affect-ibm-spectrum-symphony-7-3-1/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022012403"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-jquery-spring-dom4j-mongodb-linux-kernel-targetcli-fb-jackson-node-js-and-apache-commons-affect-ibm-spectrum-protect-plus/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021072824"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-cross-site-scripting-vulnerabilities-in-jquery-might-affect-ibm-business-automation-workflow-and-ibm-business-process-manager-bpm-cve-2020-7656-cve-2020-11022-cve-2020-11023/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3663"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-097-01"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3255/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164887/red-hat-security-advisory-2021-4142-02.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-security-vulnerability-has-been-identified-in-bigfix-platform-shipped-with-ibm-license-metric-tool-2/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170823/red-hat-security-advisory-2023-0553-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3485/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2018-14042"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2018-14040"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14042"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2019-11358"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11358"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14040"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.5,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-40150"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-40149"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-45047"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-46364"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42004"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-45693"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42003"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2015-9251"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-8331"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10735"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-9251"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2016-10735"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8331"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1471"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1438"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3916"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25857"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46175"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-44906"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0091"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3782"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2764"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2764"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-4137"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46363"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1471"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0264"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38751"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1274"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-37603"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38749"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1438"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25857"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1274"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3143"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/installation_guide/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14041"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40150"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-18214"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40152"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40149"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40152"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-14041"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2017-18214"
},
{
"trust": 0.2,
        "url": "https://issues.jboss.org/"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3143"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10146"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1721"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15720"
},
{
"trust": 0.1,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10146"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10179"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10179"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4847"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22096"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6393"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22096"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23358"
},
{
"trust": 0.1,
"url": "https://github.com/pingidentity/ldapsdk/releases"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2806"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23358"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2237"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1049"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2237"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1043"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0552"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0556"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?downloadtype=securitypatches\u0026product=appplatform\u0026version=7.4"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "171213"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2420"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "171213"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2420"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2020-04-29T00:00:00",
"db": "VULHUB",
"id": "VHN-163560"
},
{
"date": "2020-06-05T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"date": "2020-11-04T15:29:15",
"db": "PACKETSTORM",
"id": "159852"
},
{
"date": "2022-09-08T14:41:25",
"db": "PACKETSTORM",
"id": "168304"
},
{
"date": "2023-03-02T15:19:28",
"db": "PACKETSTORM",
"id": "171213"
},
{
"date": "2023-03-02T15:19:19",
"db": "PACKETSTORM",
"id": "171212"
},
{
"date": "2023-01-31T17:21:40",
"db": "PACKETSTORM",
"id": "170821"
},
{
"date": "2023-01-31T17:16:43",
"db": "PACKETSTORM",
"id": "170817"
},
{
"date": "2020-04-29T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202004-2420"
},
{
"date": "2020-04-29T21:15:11.743000",
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2023-02-03T00:00:00",
"db": "VULHUB",
"id": "VHN-163560"
},
{
"date": "2022-02-16T03:20:00",
"db": "JVNDB",
"id": "JVNDB-2020-005056"
},
{
"date": "2023-03-21T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202004-2420"
},
{
"date": "2023-11-07T03:14:27.553000",
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202004-2420"
}
],
"trust": 0.6
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
    "data": "jQuery Cross-site Scripting Vulnerability",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005056"
}
],
"trust": 0.8
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "XSS",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202004-2420"
}
],
"trust": 0.6
}
}